I like 7B models, but 13B models like Orca 2 are better, no? What is the best?
It's whatever new model gets announced here with the most upvotes on said post.
Slightly off-topic – I've been testing 13B and 7B models for a while now, and I'm really interested if people have a good one to check out, because at least for now I've settled on a 7B model that seems to work better than most 13B models I've tried.
Specifically, I've been using OpenChat 3.5 7B (Q8 and Q4) and it's been really good for my work so far, punching well above its weight class – much better than any of the 13B models I've tried. (I'm not running any formal tests; it just seems to understand what I want better than the others. I'm not doing any function calling, but even the 4-bit 7B model is able to generate JSON as well as respond coherently.)
Note: I'm specifically using the original (non-16k) models; the 16k models seem to be borked or something?
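If you want to sanity-check the JSON claim yourself, here's a minimal harness sketch (assumptions: the model's raw completion may wrap the JSON object in extra chatter, and you're working with plain text output, e.g. from llama.cpp). It pulls the first balanced `{...}` object out of a completion and validates it with the standard library:

```python
import json

def extract_json(text: str):
    """Find the first balanced {...} span in a model completion and
    parse it; return None if no valid JSON object is present.
    (Naive brace counting - braces inside string values can fool it,
    which is fine for a quick sanity check.)"""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # malformed candidate; try the next "{"
        start = text.find("{", start + 1)
    return None

# Typical quantized-model output: valid JSON surrounded by chatter.
raw = 'Sure! Here is the result:\n{"name": "OpenChat 3.5", "params_b": 7}\nHope that helps.'
print(extract_json(raw))  # → {'name': 'OpenChat 3.5', 'params_b': 7}
```

Counting how often this returns `None` over a batch of prompts gives you a rough JSON-reliability number per quant level, without any function-calling machinery.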
I feel like this and similar questions should be revived monthly.
Since the Mistral release there have been (almost) no 13B models better than Mistral finetunes, and you can see this on the Open LLM Leaderboard: first is Qwen-14B, second is a Mistral finetune (intel/neural-chat), and Orca-13B comes 6th.
If the "13" isn't a hard requirement, the answer should be a finetune of Qwen-14B, but there are almost none. There is also CausalLM-14b.