I’m fascinated by the whole ecosystem popping up around LLaMA and local LLMs. I’m also curious what everyone here is up to with the models they’re running.
Why are you interested in running local models? What are you doing with them?
Secondarily, how are you running your models? Are you actually running them on local hardware, or on a cloud service?
I’m a newbie with no GPU at home, but I’m digging into the topic at many layers… experimenting with both paid and free APIs, and with all the home-lab tools this gold rush is producing, and it’s quite engaging…
After several iterations (two months of free time spent on this), I’m back to building proper datasets.
Datasets are the most engaging part, since a well-built dataset lets almost any model fit flawlessly later on. Data interpretation is king.
Please prove me wrong on this; it’s yet another chance to learn about the topic.
In my experiments, since I maintain blacklists, I’m playing with this flow:
- gather data ethically
- process the data
- train on the data
- rank domains
The application part could be:
- a browser extension
- blacklist improvements
- domain rankings
What’s required in this context is a properly made dataset more than the most GPU-powered model 💻
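For a concrete idea, the “process the data” step in my flow looks roughly like this. It’s a minimal sketch: the source file names and labels are placeholders, not my real feeds.

```python
import csv
from pathlib import Path

# Placeholder source lists (one domain per line) and their labels
SOURCES = {"ads.txt": "ad", "trackers.txt": "tracker", "benign.txt": "benign"}

def build_dataset(out_path: str = "domains.csv") -> None:
    rows = set()
    for filename, label in SOURCES.items():
        path = Path(filename)
        if not path.exists():
            continue  # skip feeds that haven't been fetched yet
        for line in path.read_text().splitlines():
            domain = line.strip().lower()
            if domain and not domain.startswith("#"):  # drop blanks and comments
                rows.add((domain, label))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["domain", "label"])
        writer.writerows(sorted(rows))

build_dataset()
```

The point being: deduplicating, normalizing, and labeling the domains is the real work; whichever model ranks them afterwards is almost interchangeable.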
Sorry if this is quite off 🦙
PS: I’m not a dev and I use GPT to write some of my code. The best iteration and lesson learned came from one prompt:
“k now please make this function parallel, up to 32 concurrent”
and that’s how I learned concurrent processing in Python 🐍🙏
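For anyone curious, the kind of thing GPT handed back looks like this. A minimal sketch, with check_domain as a made-up stand-in for whatever I/O-bound work you’re parallelizing:

```python
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def check_domain(domain: str) -> tuple[str, bool]:
    """Hypothetical per-domain check: does it resolve at all?"""
    try:
        socket.gethostbyname(domain)
        return domain, True
    except OSError:
        return domain, False

def check_domains(domains: list[str]) -> dict[str, bool]:
    results = {}
    # Up to 32 concurrent workers, as in the prompt above;
    # threads are the easy choice here because the work is I/O-bound.
    with ThreadPoolExecutor(max_workers=32) as pool:
        futures = [pool.submit(check_domain, d) for d in domains]
        for future in as_completed(futures):
            domain, resolves = future.result()
            results[domain] = resolves
    return results

print(check_domains(["example.com", "definitely-not-real.invalid"]))
```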
Trying to get a better understanding of how prompts work in relation to fine-tunes, and to see whether any of them are actually reliable enough to be used in a “production”-type environment.
My end goals are basically:
- A reliable AI assistant that I know is safe, secure and private. Any information about myself, my household or my proprietary ideas won’t be saved on some company’s server to be reviewed and trained upon. I don’t want to ask sensitive questions about things like taxes or healthcare, only to have some person review them and have them end up in a model.
- Eventually create a fine-tuned coding model for the languages I care about. Right now that’s just Python, and ChatGPT is OK, but they keep accidentally breaking it while putting up more guardrails against people doing crazy stuff. One day it’s great at JavaScript, the next it’s terribad. I need consistency, and I’ve realized that with proprietary models I don’t get that. A model in my home? I do.
- Eventually create an IoT service across my home that is managed (with tight constraints) by an AI. Lots of guardrails. I don’t trust generative AI not to set my thermostat to 150 degrees lol (see the clamp sketch after this list).
- Tinker with these things while they’re still new so I understand how they work under the hood; when AI becomes more mainstream I’ll have a leg up, since my field (development) feels like it’s right there with artists on the chopping block as AI gets better lol
- I’m putting together some guides and tutorials to help others get into open source AI too. The more folks who can tinker with it, the better.
- Finally, I’m building an AI assistant prompt card to make one that won’t lie to me or hallucinate as much, and that speaks in more natural language while still having the knowledge it needs to answer my questions well. I’m trying model after model looking for the right one to accomplish this. So far, XWin 70b using Vicuna instruction templates has been fantastic for it.
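On the IoT point, the kind of clamp I have in mind is dead simple. A hypothetical sketch, not the actual service (the limits are made up):

```python
# Hard-coded safe range in Fahrenheit; the AI never writes to the
# thermostat directly, only through this gate.
SAFE_MIN_F = 60
SAFE_MAX_F = 78

def apply_thermostat(proposed_f: float) -> float:
    """Clamp whatever setpoint the model proposes into the safe range."""
    clamped = max(SAFE_MIN_F, min(SAFE_MAX_F, proposed_f))
    if clamped != proposed_f:
        print(f"rejected {proposed_f}F, applying {clamped}F instead")
    return clamped

apply_thermostat(150)  # -> 78, and my house doesn't melt
```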
A lot of it comes down to just wanting to learn, but a big piece of it is that I have consistency, stability and privacy when running an LLM at home.
As for how I run it? Ho ho ho… a bit overkill, since as a developer I have a lot of hardware available to me.
- M2 Ultra Mac Studio, 192GB: the main inference machine, with 147GB of that unified memory available as VRAM. It acts as a headless server that I connect to from any device in my house. My main AI assistant runs on this.
- My main desktop is a Windows box with an RTX 4090, so I run Phind-CodeLlama on it most of the time. If I need to extend the context window, I swap the M2 Ultra over to Phind so I can do 100,000-token context… but otherwise it’s so darn fast on the 4090 running q4 that I mostly use that.
- A MacBook Pro that runs a little Mistral 7B. It also acts as a server when I’m not on it, letting my Windows machine have all three models running at once.
I usually connect the Mistral to Continue.dev in Visual Studio Code.
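For anyone wondering what “connect to from any device” means in practice: the Mac just exposes an OpenAI-compatible endpoint on the LAN, and everything else talks to it over HTTP. A rough sketch, with a made-up address and model name (the exact server software matters little, since llama.cpp, LM Studio and Ollama all speak roughly this dialect):

```python
import requests

# Hypothetical LAN address of the headless Mac Studio
API_BASE = "http://192.168.1.50:8080/v1"

resp = requests.post(
    f"{API_BASE}/chat/completions",
    json={
        "model": "mistral-7b-instruct",  # placeholder model name
        "messages": [{"role": "user", "content": "Say hello from the LAN."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Continue.dev can then be pointed at the same endpoint instead of a cloud provider.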
least wealthy /r/LocalLLaMa user
Love your post and ambitions, very inspiring. I’m looking to do something similar: an assistant the whole family can engage with, connected to home automation and private data. Looking forward to seeing more of what you build; anywhere in particular you share besides here?
mostly asking it perverted questions with sexual overtones
I don’t know why you’re getting downvoted. By my best reckoning, about two-thirds of this sub’s regulars use LLM inference for smut.
It’s not one of my use-cases, but to each their own, and it’s undeniably helping advance the state of the art (much as the online porn industry helped advance web development).