Yi is a series of LLMs trained from scratch at 01.AI. The models use the same architecture as Llama, making them compatible with the existing Llama-based ecosystem. Just in November, they released:

  • Base 6B and 34B models
  • Models with extended context of up to 200k tokens
  • Today, the Chat models

With this release, they are also publishing 4-bit models quantized with AWQ and 8-bit models quantized with GPTQ.
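
If you want to try the quantized chat weights from Python, here is a minimal sketch using transformers. It assumes a recent transformers (4.35+) plus autoawq are installed, and that the 4-bit AWQ chat weights sit under a repo id like `01-ai/Yi-34B-Chat-4bits` (double-check the exact name on the Hub):

```python
# Minimal sketch: load a 4-bit AWQ quant of the Yi chat model with transformers.
# Repo id is an assumption; verify it on the Hugging Face Hub before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-Chat-4bits"  # assumed repo id for the AWQ 4-bit chat weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Add trust_remote_code=True if the repo still ships custom modeling code.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a two-sentence summary of the Yi models."}]
# Use whatever chat template the tokenizer ships with (ChatML for the Yi chat models).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```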

Things to consider:

  • Llama-compatible format, so you can use it across a bunch of tools (see the sketch after this list)
  • The license is non-commercial by default, unfortunately, but you can request commercial use and they are quite responsive
  • 34B is an amazing model size for consumer GPUs
  • Yi-34B is at the top of the open-source LLM leaderboard, making it a very strong base model for a chat one
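
Because the weights follow the standard Llama layout, the GGUF conversions also run in llama.cpp-based tooling without changes. A minimal sketch with llama-cpp-python, assuming you have already downloaded one of the GGUF quants (the filename below is a placeholder):

```python
# Minimal sketch: run a GGUF quant of Yi-34B on a consumer GPU via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="yi-34b-chat.Q4_K_M.gguf",  # placeholder filename, use your downloaded quant
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to the GPU if they fit
)

out = llm("Write a short poem about open-weight models.", max_tokens=128)
print(out["choices"][0]["text"])
```

Dial n_gpu_layers down if the quant you picked does not fully fit in VRAM.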

AnomalyNexusB · 10 months ago:

    Liking this one - seems particularly good at long-form storytelling.

    NB: you may need to update your software… it seems to rely on something pretty recent, at least for text gen / llama.cpp. It crashed until I updated (and my existing copy was at most 48 hours old).

    Also, something odd with the template: the suggested template from the GGUF seems to be Alpaca, while TheBloke's model card says ChatML. Under both it occasionally spits out <|im_end|>, but ChatML seems better overall.
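
For anyone setting the template by hand, this is roughly what the ChatML layout looks like. A small sketch; the role names and system line follow the generic ChatML convention rather than an official Yi prompt spec:

```python
# Minimal sketch of the ChatML prompt layout the Yi chat models expect.
def to_chatml(messages):
    prompt = ""
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|>.
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # A trailing open tag tells the model it is the assistant's turn to speak.
    prompt += "<|im_start|>assistant\n"
    return prompt

print(to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me the opening of a long-form story."},
]))
```

If the model still prints a literal <|im_end|>, adding it as a stop string in your front end usually cleans up the output.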