openchat 3.5 16k

  • hibbityB · 1 year ago
    I would, but anyone who puts that much effort into a model release and doesn't include the trained prompt format seems like they must not want me to use it.

    • perlthoughtsOPB · 1 year ago
      Yeah, I agree, it's kind of weird, but you don't have to use the GPT4 Correct User: prefix; GPT4 User: works better imo. That said, this is the prompt format they used when training the model, so it's best to follow it.
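
      For reference, a minimal sketch of that chat format in Python. The role prefixes come from the discussion above and the <|end_of_turn|> separator from the OpenChat 3.5 model card; treat the exact spacing as an assumption and check the tokenizer's chat template if in doubt.

      ```python
      # Minimal sketch of the OpenChat 3.5 prompt format discussed above.
      # Swap role_prefix to "GPT4 User" for the un-"Correct" variant.

      def build_prompt(turns, role_prefix="GPT4 Correct User"):
          """turns is a list of (user_msg, assistant_msg) pairs; the last assistant_msg may be None."""
          assistant_prefix = role_prefix.replace("User", "Assistant")
          parts = []
          for user_msg, assistant_msg in turns:
              parts.append(f"{role_prefix}: {user_msg}<|end_of_turn|>")
              if assistant_msg is not None:
                  parts.append(f"{assistant_prefix}: {assistant_msg}<|end_of_turn|>")
          # Leave the assistant prefix open so the model completes it.
          parts.append(f"{assistant_prefix}:")
          return "".join(parts)

      print(build_prompt([("Hello, who are you?", None)]))
      # -> GPT4 Correct User: Hello, who are you?<|end_of_turn|>GPT4 Correct Assistant:
      ```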

      • hibbityB · 1 year ago
        I would be stoked and actually mess with it if it had a proper instruct or system tag. The results from models trained like that are just easier to tune.

  • luncherooB · 1 year ago
    Just a quick note for anyone using LM Studio who doesn't want to fiddle too much: the Codellama OpenAssistant preset works fine without ask/answer loops.
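
    If you want to reproduce that outside LM Studio, here is a rough sketch of an OpenAssistant-style template. The exact special tokens and stop strings the preset ships with are assumptions on my part, so check the preset JSON before relying on them.

    ```python
    # Rough sketch of what an OpenAssistant-style preset boils down to.
    # The special tokens below are assumed; verify against the preset's config.

    OA_TEMPLATE = "<|prompter|>{user_message}<|endoftext|><|assistant|>"
    STOP_STRINGS = ["<|endoftext|>", "<|prompter|>"]  # stopping here avoids ask/answer loops

    prompt = OA_TEMPLATE.format(user_message="Summarize RoPE scaling in one sentence.")
    ```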

  • paryska99B · 1 year ago
    I know these benchmarks are a tough topic, but on paper this looks really impressive. It claims to be better than Mistral, and I loved the progress Mistral brought. If someone tries this model out, can you give feedback under this post? Much appreciated.

    • perlthoughtsOPB · 1 year ago
      No, NurtureAI and OpenChat are not affiliated. NurtureAI just extended the context; it looks like another person did an OpenChat 16k merge of some models as well.
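
      For anyone who wants to try the extended-context build, a hedged sketch of loading it with Hugging Face transformers. The repo id below is an assumption based on this thread, so substitute the actual one.

      ```python
      # Hedged sketch: load the 16k context extension of OpenChat 3.5.
      # The repo id is assumed from the thread (NurtureAI's context extension);
      # replace it with the real Hugging Face repo before running.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      repo_id = "NurtureAI/openchat_3.5-16k"  # assumed repo id
      tokenizer = AutoTokenizer.from_pretrained(repo_id)
      model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

      prompt = "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:"
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      output = model.generate(**inputs, max_new_tokens=64)
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```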