Are there any tricks to speed up 13B models on a 3090?
Currently I'm using the regular Hugging Face model quantized to 8-bit with a GPTQ-capable fork of KoboldAI.
Especially when the context limit changes, it's pretty slow and nowhere near real time.
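For what it's worth, this is roughly how I'm measuring the tokens-per-second figure (a simplified sketch using plain transformers rather than the fork's own loader, and the model id is just a placeholder):

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# placeholder model id; the KoboldAI fork loads the quantized weights its own way
model_id = "some-13b-model"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tok("Once upon a time", return_tensors="pt").to("cuda")

start = time.time()
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
torch.cuda.synchronize()
elapsed = time.time() - start

# count only the newly generated tokens, not the prompt
new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.2f} tokens/s")
```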
I'm now using a 4-bit GPTQ version of the same model. After generation completes, VRAM usage goes up to 16.2 GB (out of 24 GB), and as far as I can tell nothing else is using the GPU (no browser windows with YouTube, etc.).
I'm still only getting a bit under 4.00 tokens per second, so I don't think anything is getting offloaded to the CPU.
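To rule out CPU offloading I checked with something like this (assuming `model` is the loaded transformers model object; the fork's internals may differ):

```python
import torch

# how much of the 24 GB the process has actually claimed
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.1f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**3:.1f} GiB")

# every parameter should report cuda:0 if nothing lives on the CPU
devices = {str(p.device) for p in model.parameters()}
print("param devices:", devices)

# accelerate sets this attribute when layers are split across devices or offloaded
print("device_map:", getattr(model, "hf_device_map", None))
```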