What inference engine did you use? It's possibly a bug, as these things tend to happen with the new models.
I for one can't wait for lookahead decoding in llama.cpp and others. Combine that with some smaller models and we'll have blazing fast speeds on pennies' worth of hardware, I reckon.
I use koboldcpp.
You are probably right about it being a bug. At first I couldn't get the model to work at all (it crashed koboldcpp when loading up), but that was just because I had a week-old version of koboldcpp. I needed to download the version that came out like 4 days ago (at the time), ha! Then it loaded up fine, but with the quirk I already mentioned. I guess it will get fixed in short order.
Yeah, the future of local LLMs lies in the smaller models for sure! Thanks for the input.