I already tried to set up fastchat-t5 on a DigitalOcean virtual server with 32 GB RAM and 4 vCPUs for $160/month, using CPU inference. The performance was horrible: about 5 seconds to the first token and then roughly 1 word per second.
Any ideas how to host a small LLM like fastchat-t5 economically?
3 ideas
1. fastchat-t5 is a 3B model in bfloat16, which means it needs at least 3B params x 16 bits ~ 6 GB of RAM for the weights alone, plus memory for the 2K-token context limit (prompt and answer combined). A quick way to speed it up is to use a quantized version: an 8-bit quant with almost no quality loss, like https://huggingface.co/limcheekin/fastchat-t5-3b-ct2, gives you a file half the size and roughly 2x faster inference (rough size math and a loading sketch are below). But better read #2 :)
2. A Mistral finetune like https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF, which is 7B quantized to 4 bits, will be about the same size as 8-bit fastchat-t5 but perform better, since Mistral was most probably trained on more tokens than Llama 2 (~2T tokens), while flan-t5 (the base model of fastchat-t5) only saw ~1T. Why a larger quantized model beats a smaller unquantized one is explained here: https://github.com/ggerganov/llama.cpp/pull/1684 (a llama.cpp loading sketch is below as well).
(attached chart: https://preview.redd.it/54x2ff87gk0c1.png?width=839&format=png&auto=webp&s=dae1d27376c9c858935c285dd765246af79a86a4)
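To make the size comparison concrete, here is the back-of-envelope math; real checkpoint files will be a bit larger because of tokenizer files, embeddings and per-layer quantization details:

```python
# Back-of-envelope RAM needed for the weights: params x bits / 8.
# (Actual files are slightly larger: embeddings, metadata, etc.)
def weight_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * bits_per_param / 8

print(f"fastchat-t5 3B @ bf16 : {weight_gb(3, 16):.1f} GB")  # ~6.0 GB
print(f"fastchat-t5 3B @ int8 : {weight_gb(3, 8):.1f} GB")   # ~3.0 GB
print(f"Mistral 7B @ 4-bit    : {weight_gb(7, 4):.1f} GB")   # ~3.5 GB, roughly the same footprint
```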
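For #1, a minimal CPU-inference sketch using the CTranslate2 runtime. It assumes the linked repo is a standard ct2 conversion of the T5-based model and that the original lmsys/fastchat-t5-3b-v1.0 tokenizer can be reused; it also skips the FastChat conversation template for brevity, so treat the model IDs and generation settings as illustrative:

```python
# pip install ctranslate2 transformers huggingface_hub sentencepiece
import ctranslate2
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

# Download the int8 CTranslate2 conversion (assumed repo layout).
model_dir = snapshot_download("limcheekin/fastchat-t5-3b-ct2")

# T5 is an encoder-decoder model, so CTranslate2 exposes it as a Translator.
translator = ctranslate2.Translator(model_dir, device="cpu", compute_type="int8", intra_threads=4)
tokenizer = AutoTokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0")

prompt = "What is the capital of France?"
source_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = translator.translate_batch([source_tokens], max_decoding_length=256, sampling_topk=1)
output_tokens = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens), skip_special_tokens=True))
```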
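And for #2, a sketch using llama-cpp-python (Python bindings for llama.cpp) on the GGUF file. The exact filename and the prompt template are assumptions, so check the model card on the repo; set n_threads to your vCPU count.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Q4_K_M is a common quality/size trade-off for 4-bit; filename assumed from the repo's usual naming.
llm = Llama(
    model_path="./neural-chat-7b-v3-1.Q4_K_M.gguf",
    n_ctx=2048,      # context window (prompt + answer)
    n_threads=4,     # match the number of vCPUs
)

# Prompt template assumed; verify it against the neural-chat model card.
prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nWhat is the capital of France?\n"
    "### Assistant:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### User:"])
print(out["choices"][0]["text"])
```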
Wow, thanks, that's a really in-depth comment! I will try what you say.