cleverestx to LocalLLaMA@poweruser.forum • The Problem with LLMs for chat or roleplay
1 year ago
I have an RTX 4090, 96 GB of RAM, and an i9-13900K CPU, and I still keep going back to 20B (4-6 bpw) models because of the awful performance of 70B models, even though a 2.4 bpw quant is supposed to fit entirely in VRAM... even using ExLlamaV2.
What is your trick for getting better performance? Unless I drop to a tiny 2048-token context, generation speed is unusable (under 1 token/sec). What context size and settings are you using? Thank you.
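Here's my rough math, in case it helps frame the question (plain Python; the layer/head counts are Llama-2-70B's published config, while the overhead and desktop-usage figures are just my guesses):

```python
# Back-of-the-envelope VRAM estimate for a 2.4 bpw 70B model on a 24 GB card.
# Layer/head counts follow Llama-2-70B's config (80 layers, 8 KV heads via GQA,
# head_dim 128); the overhead and desktop-usage numbers are assumptions.

GIB = 1024 ** 3

def weight_gib(params: float, bpw: float) -> float:
    """Quantized weight memory: parameters * bits-per-weight / 8 bytes."""
    return params * bpw / 8 / GIB

def kv_cache_gib(context: int, layers: int = 80, kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """FP16 KV cache: two tensors (K and V) * layers * kv_heads * head_dim per token."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * context / GIB

weights = weight_gib(70e9, 2.4)   # ~19.6 GiB for the weights alone
overhead = 1.5                    # activations, buffers, CUDA context (assumed)
budget = 22.5                     # 24 GB minus ~1.5 GiB assumed for desktop/driver

for ctx in (2048, 4096, 8192):
    total = weights + kv_cache_gib(ctx) + overhead
    verdict = "fits" if total <= budget else "likely spills to system RAM"
    print(f"context {ctx:5d}: ~{total:4.1f} GiB -> {verdict}")
```

If that estimate is anywhere near right, the 2.4 bpw weights plus the KV cache leave almost no headroom past a few thousand tokens of context, and once anything spills over to system RAM I see exactly the sub-1-token/sec speeds described above.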
Why can't we get a 20-34B version of this very capable Mistral?