I’m using Mistral-7B to get a hands-on understanding of how LLM inference works.
Does anyone have ideas for improving this process?
Please don’t suggest cutting the number of generated tokens down to 1. :)
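If it helps anyone following along, here is a minimal sketch of the token-by-token generation loop that makes the inference procedure visible. It assumes the Hugging Face `transformers` library and the public `mistralai/Mistral-7B-v0.1` checkpoint; the prompt and step count are placeholders, not the OP’s actual setup.

```python
# Minimal greedy decoding loop: one forward pass per generated token.
# Assumes the Hugging Face `transformers` library and the public
# mistralai/Mistral-7B-v0.1 checkpoint (illustrative, not the OP's config).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "The key steps in LLM inference are"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

for _ in range(20):  # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits       # shape: (batch, seq_len, vocab)
    next_id = logits[:, -1, :].argmax(dim=-1)  # greedy: most likely next token
    input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```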
I just wrote a tutorial on how to scale Mistral-7B across many GPUs in the cloud; I hope it gives you some value. I’m not sure whether you’re looking to do on-demand inference or batch inference over a set of inputs.
https://www.reddit.com/r/LocalLLaMA/comments/17k2x62/i_scaled_mistral_7b_to_200_gpus_in_less_than_5/
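For the batch-over-many-inputs case, a single machine can already go a long way by batching prompts before reaching for multi-GPU setups. Below is a hedged sketch of batched generation with `transformers`; the prompts and parameters are placeholders and are not taken from the linked tutorial.

```python
# Batched inference sketch: pad a list of prompts and generate for all of
# them in one call. Assumes the Hugging Face `transformers` library and the
# mistralai/Mistral-7B-v0.1 checkpoint (illustrative assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompts = ["Explain attention in one sentence.", "What is a KV cache?"]
batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

# One batched forward pass per step amortizes GPU cost across all inputs.
outputs = model.generate(**batch, max_new_tokens=64, do_sample=False)
for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```

Left-padding matters here: decoder-only models read the last position of each sequence when predicting the next token, so padding on the right would put pad tokens where the model expects the prompt to end.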