Hi. Before anyone asks: I am not behind this model in any capacity, nor was I asked by its authors to post this.
I'm just a regular LLM enjoyer who wants better 13B models in the near future, because right now they're being run into the ground by the many Mistral 7B finetunes, and since we don't have any Mistral 13B base model…
The model in question is this one, which seems to be flying under the radar for some reason:
https://huggingface.co/sequelbox/DaringFortitude
TheBloke already did his magic on it; just search his profile on Hugging Face with Ctrl+F, or use the little script below.
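(If Ctrl+F feels clunky, something like this should also list the quant repos. Just a sketch using the huggingface_hub client; the search string is my guess at how the repos are named.)

```python
# Sketch: list TheBloke's repos whose name mentions DaringFortitude.
# Assumes huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(author="TheBloke", search="DaringFortitude"):
    print(m.id)  # e.g. GGUF/GPTQ/AWQ quant repos, if they exist under that name
```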
The reason I'm posting this: I honestly think it's a really, really good and useful base model for further finetuning, merging, etc. (I did a little testing, but my machine is too weak to go any further).
What have you found it useful for? The model card is pretty vague.
53 GB?
It's in FP32 rather than FP16.
Its average on the Open LLM Leaderboard is 51.
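(Side note: if FP32 is the only reason it's 53 GB, downcasting to FP16 should roughly halve the size on disk. Rough sketch, assuming the repo loads with the standard transformers AutoClasses and you have enough memory to hold it once.)

```python
# Rough sketch: load the FP32 checkpoint in FP16 and re-save it, roughly halving its size.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sequelbox/DaringFortitude"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(repo)

model.save_pretrained("DaringFortitude-fp16", safe_serialization=True)
tokenizer.save_pretrained("DaringFortitude-fp16")
```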
Really nice. I have a dream: we need to find a way to iterate on base models so that every finetune gets closer to SOTA :D
There is very little info.
It seems to be instruction-finetuned, but with what template? ChatML? There is no mention of anything. Posting it this way is pretty bad.
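(For reference, a ChatML prompt would look like the snippet below. This is purely a guess, since the card doesn't say which template, if any, was used.)

```python
# Purely illustrative: what a ChatML-formatted prompt looks like.
# Whether DaringFortitude was actually trained on this template is unknown.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nGive me three uses for a 13B base model.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```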
I really wonder who this TheBloke is. What a legend.