I’m fascinated by the whole ecosystem popping up around llama and local LLMs. I’m also curious what everyone here is up to with the models they are running.

Why are you interested in running local models? What are you doing with them?

Secondarily, how are you running your models? Are you truly running them on local hardware, or on a cloud service?

  • fab_spaceB
    11 months ago

    I’m a newbie and I have no GPU at home, but I’ve been digging into the topic at many layers: experimenting with paid and free APIs and all the home-lab tools this rush is producing, and it’s quite engaging…

    After several iterations (2 months of full free time spent on this), I’m back to creating proper datasets.

    Datasets are the most engaging part: with a proper dataset, any model can fit flawlessly later on. Data interpretation is king.

    Please prove me wrong on this — yet another chance to learn about the topic.

    In my experiments, since I maintain blacklists, I’m playing with this flow:

    1. get data in an ethical way
    2. process the data
    3. train on the data
    4. rank domains
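
    The final ranking step above could be sketched like this — a toy, model-free scoring, not the poster’s actual method, assuming the processed data is a list of records that each carry a `domain` field:

    ```python
    from collections import Counter

    def rank_domains(records):
        """Rank domains by how often they appear in the processed records.

        `records` is assumed to be a list of dicts with a "domain" key;
        real pipelines would score on richer features than raw frequency.
        """
        counts = Counter(r["domain"] for r in records)
        # most_common() sorts by count, highest first
        return [domain for domain, _ in counts.most_common()]
    ```

    A trained model would replace the frequency count with a learned score, but the surrounding shape (records in, ordered domain list out) stays the same.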

    The application side can be:

    • browser extension
    • blacklist improvements
    • domain rankings

    What’s required in this context is a properly made dataset, more than the best GPU-powered model 💻

    Sorry if this is quite off topic 🦙

    PS: I’m not a dev and I use GPT to write some code. The best iteration and lesson learned was prompting:

    k now please make this function parallel, up to 32 concurrent

    and that’s how I learned concurrent processing in Python 🐍🙏
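
    That kind of request typically maps to `concurrent.futures` with a bounded worker pool — a minimal sketch, where `check_domain` is a hypothetical stand-in for whatever function was being parallelized:

    ```python
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def check_domain(domain):
        """Hypothetical per-domain check standing in for the original function."""
        return domain, domain.endswith(".example")

    def check_domains_parallel(domains, max_workers=32):
        """Run check_domain over many domains, up to 32 concurrently."""
        results = {}
        # The pool caps concurrency at max_workers; extra tasks queue up.
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            futures = [pool.submit(check_domain, d) for d in domains]
            for fut in as_completed(futures):
                domain, flagged = fut.result()
                results[domain] = flagged
        return results
    ```

    Threads suit I/O-bound work like network lookups; for CPU-bound processing, `ProcessPoolExecutor` is the drop-in alternative.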