I’m fascinated by the whole ecosystem popping up around llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.
Why are you interested in running local models? What are you doing with them?
Secondarily, how are you running your models? Are you truly running them on local hardware, or on a cloud service?
I’m a newbie and I have no GPU at home, but I’m digging into the topic at many layers… experimenting with both paid and free APIs and all the home-lab tools this rush is putting out there, and it’s quite engaging…
After several iterations (2 months of full free time spent on this), I’m back to creating proper datasets.
Datasets are the most engaging part: with a good one, any model can be fitted flawlessly later on. Data interpretation is king.
Please prove me wrong on this; it’s yet another chance to learn about the topic.
In my experiment, since I deliver blacklists, I’m playing with this kind of flow:
The application part can be:
What’s required in this context is a properly made dataset more than the best GPU-powered model 💻
Sorry if this is quite off-topic 🦙
PS: I’m not a dev and I use GPT to write some of the code. The best iteration, and the best lesson learned, came from one prompt:
"k now please make this function parallel, up to 32 concurrent"
and that’s how I learned concurrent processing in Python 🐍🙏
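For anyone curious, the kind of code that prompt tends to produce can be sketched with Python's standard-library `concurrent.futures` module. This is just an illustrative sketch, not my actual code; `check_domain` is a hypothetical stand-in for whatever sequential function you start with (here, a toy blacklist check):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-in for the original sequential function,
# e.g. deciding whether a domain belongs on a blacklist.
# Real code would do a network lookup or similar I/O here.
def check_domain(domain: str) -> tuple[str, bool]:
    return domain, domain.endswith(".bad")

def check_domains_parallel(domains, max_workers=32):
    """Run check_domain over many inputs, up to 32 at a time."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Submit every job, then collect results as each one finishes.
        futures = {pool.submit(check_domain, d): d for d in domains}
        for fut in as_completed(futures):
            domain, flagged = fut.result()
            results[domain] = flagged
    return results

flags = check_domains_parallel(["a.good", "b.bad", "c.good"])
print(flags["b.bad"])  # True
```

Threads work well here because the work is I/O-bound; for CPU-bound work you'd swap in `ProcessPoolExecutor` with the same interface.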