Hello!

By popular demand, I am planning a fine-tune of https://huggingface.co/dreamgen/opus-v0-7b on top of Yi-34B, and I wonder whether to use the 200K variant as the base.

The regular Yi-34B seems slightly better than Yi-34B-200K on standard benchmarks, but I wonder how each one “feels” in practice, and whether the 200K version’s loss of performance on short contexts is worth it, given that the regular version can already be used up to 32K tokens.

(Yi-34B vs Yi-34B-200K)

Has anyone tried an analysis of these two models at various sequence lengths (<4K, <8K, <16K, etc.)?
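
For concreteness, this is roughly the kind of comparison I have in mind; a minimal sketch I have not run, assuming the 01-ai Hugging Face repo names and WikiText-2 as a stand-in eval corpus:

```python
# Rough comparison of the two base models at different context lengths,
# using perplexity on a public corpus as a cheap proxy for "feel".
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODELS = ["01-ai/Yi-34B", "01-ai/Yi-34B-200K"]  # assumed HF repo names
LENGTHS = [4096, 8192, 16384, 32768]

# Any long-ish text works; the WikiText-2 test split is just a convenient default.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])

for name in MODELS:
    tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
    )
    ids = tokenizer(text, return_tensors="pt").input_ids
    for length in LENGTHS:
        chunk = ids[:, :length].to(model.device)
        with torch.no_grad():
            loss = model(chunk, labels=chunk).loss
        print(f"{name} @ {length:>6} tokens: ppl = {torch.exp(loss).item():.2f}")
```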

  • BlueMetaMindB · 10 months ago

    It sounds rather like it was trained on ChatGPT output and they didn’t curate it enough to delete those “As a large language model trained by OpenAI…” category statements.

    It’s kinda like Shutterstock watermarks showing up in image generation.

    • dogesatorB · 10 months ago

      Yeah, I’m saying that ChatGPT outputs are contained in internet posts from 2023, so simply training on 2023 internet data would end up including ChatGPT data as a side effect.

      • BlueMetaMindB · 10 months ago

        Yes, I understood you. My claim differs in that I think they DIRECTLY used a lot of GPT-4 output through the API, which is very probable because a lot of LLM training is done that way: you ask GPT-4 to generate examples of conversations with the properties you want your LLM to learn and then train on that (roughly like the sketch at the end of this comment).

        For the model to self-identify as GPT, I don’t think randomly crawled chat examples from the internet would be enough.

        I am not trying to make a strong claim here, it’s just a thought. Maybe both.
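
        For illustration, the API-based distillation I mean looks roughly like this; an untested sketch where the prompt, model name, and output file are just placeholders:

        ```python
        # Minimal sketch of API-based distillation: ask GPT-4 for example
        # conversations with desired properties, then save them as training data.
        import json
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        PROMPT = (
            "Write a short dialogue between a user and a helpful assistant "
            "about planning a weekend trip."
        )

        with open("synthetic_conversations.jsonl", "w") as f:
            for _ in range(100):
                resp = client.chat.completions.create(
                    model="gpt-4",
                    messages=[{"role": "user", "content": PROMPT}],
                    temperature=1.0,
                )
                f.write(json.dumps({"conversation": resp.choices[0].message.content}) + "\n")
        ```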