So Mistral-7b is a pretty impressive 7B param model … but why is it so capable? Do we have any insights into its dataset? Was it trained very far beyond the scaling limit? Any attempts at open reproductions or merges to scale up # of params?
I’m guessing GQA helped. Llama 2 70B and 34B used Grouped Query Attention, but the 7B/13B models didn’t.
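For anyone unfamiliar, here's a minimal sketch of what GQA does: groups of query heads share a single key/value head, which shrinks the KV cache compared to standard multi-head attention. The 32/8 head split below matches what Mistral-7B reportedly uses, but the code itself is just an illustration (assuming PyTorch), not their implementation.

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, n_q_heads=32, n_kv_heads=8):
    # q: (batch, seq, n_q_heads, head_dim); k, v: (batch, seq, n_kv_heads, head_dim)
    group_size = n_q_heads // n_kv_heads
    # Repeat each KV head so it is shared by group_size query heads.
    k = k.repeat_interleave(group_size, dim=2)
    v = v.repeat_interleave(group_size, dim=2)
    # Move heads before the sequence dim: (batch, heads, seq, head_dim)
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    attn = F.softmax(scores, dim=-1)
    out = attn @ v
    return out.transpose(1, 2)  # back to (batch, seq, n_q_heads, head_dim)

# Toy example: 32 query heads share 8 KV heads (4 query heads per group).
b, s, d = 1, 16, 64
q = torch.randn(b, s, 32, d)
k = torch.randn(b, s, 8, d)
v = torch.randn(b, s, 8, d)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 16, 32, 64])
```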
https://preview.redd.it/je2q9vhllq0c1.png?width=871&format=png&auto=webp&s=d23b1cdd307dfa54fb4dd788a0f6ea90ee23fa94
Knowledge is a strange goal for any model when we have the internet, IMO. Just connect your model to a web search.
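In practice that just means retrieve-then-generate: pull a few search snippets and stuff them into the prompt. Rough sketch below; `web_search` and `generate` are hypothetical stand-ins for whatever search API and model client you actually use.

```python
def web_search(query: str, k: int = 3) -> list[str]:
    # Hypothetical: return the top-k result snippets for the query.
    raise NotImplementedError

def generate(prompt: str) -> str:
    # Hypothetical: run your local model (e.g. Mistral-7B) on the prompt.
    raise NotImplementedError

def answer_with_search(question: str) -> str:
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using the web results below.\n"
        f"Web results:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)
```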