Eric Hartford, the author of the Dolphin models, released dolphin-2.2-yi-34b.
This is one of the earliest community finetunes of Yi-34B.
Yi-34B was developed by a Chinese company (01.AI), which claims SOTA performance on par with GPT-3.5.
HF: https://huggingface.co/ehartford/dolphin-2_2-yi-34b
Announcement: https://x.com/erhartford/status/1723940171991663088?s=20
I took a short break from my 70B tests (still working on those!) and tried TheBloke/dolphin-2_2-yi-34b-GGUF Q4_0. It instantly claimed 4th place on my list.
A 34B taking 4th place among the 13 best 70Bs! A 34B model that beats 9 70Bs (including dolphin-2.2-70B, Samantha-1.11-70B, StellarBright, Airoboros-L2-70B-3.1.2 and many others). A 34B with 16K native context!
Yeah, I’m just a little excited. I see a lot of potential with the Yi series of models and proper finetunes like Eric’s.
Haven’t done the RP tests yet, so back to testing. Will report back once I’m done with the current batch (70Bs take so damn long, and 120B even more so).
Wow, I gotta try it, thanks for the hype! Do the GPTQ/AWQ versions differ from GGUF in terms of context? It's listed that the context is only 4096.
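For what it's worth, the quant's metadata can advertise a smaller default context than the model's native one, and with llama-cpp-python you can override it when loading the GGUF. Here's a minimal sketch; the filename and the 16K value are assumptions based on the claimed native context, so adjust to whatever you actually downloaded:

```python
from llama_cpp import Llama

# Load the Q4_0 GGUF with an explicit 16K context window.
# Filename is an assumption; point it at your local file.
llm = Llama(
    model_path="dolphin-2_2-yi-34b.Q4_0.gguf",
    n_ctx=16384,      # override the default/advertised context length
    n_gpu_layers=-1,  # offload all layers to GPU if you have the VRAM
)

out = llm(
    "You are Dolphin, a helpful AI assistant.\nUSER: Hello!\nASSISTANT:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

I can't speak to the GPTQ/AWQ configs, so check their config.json / loader settings separately.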