I’m fascinated by the whole ecosystem popping up around LLaMA and local LLMs. I’m also curious what everyone here is up to with the models they are running.
Why are you interested in running local models? What are you doing with them?
Secondarily, how are you running your models? Are you truly running them on local hardware, or on a cloud service?
Trying to get a better understanding of how prompts work in relation to fine-tunes, and trying to see if any of them are actually reliable enough to be used in a “production”-type environment. That’s basically my end goal.
A lot of it comes down to just wanting to learn, but a big piece of it is the consistency, stability, and privacy I get when running an LLM at home.
As for how I run it? Ho ho ho… it’s a bit overkill, since as a developer I have a lot of hardware available to me.
I usually connect the Mistral model to Continue.dev in Visual Studio Code.
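For anyone curious what that hookup looks like, here’s a minimal sketch of a Continue config, assuming the Mistral model is served locally through Ollama (the model tag, titles, and provider choice are my assumptions, not the poster’s exact setup, and the schema can differ between Continue versions):

```json
{
  "models": [
    {
      "title": "Local Mistral",
      "provider": "ollama",
      "model": "mistral:7b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local Mistral",
    "provider": "ollama",
    "model": "mistral:7b"
  }
}
```

With something like that in `~/.continue/config.json`, Continue picks the model up inside VS Code and inference never leaves the machine, which is the whole privacy point above.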
Love your post and ambitions, very inspiring. Looking to do something similar: a family-engaging assistant connecting to home automation and private data. Looking forward to seeing more of what you build; anywhere in particular you share aside from here?
least wealthy /r/LocalLLaMa user