I’ll be honest: my expectations for a 7b q6 model generating a usable code block would be pretty darn low. I use a 7b as a small code-snippet analyzer in Continue.dev, but I’m not sure I’d put much faith in any 7b for that task. The CodeLlama 34b models are actually pretty decent, though; I use the 34b q8 of CodeFuse, Codebooga, and Phind v2, and they all hold up well.