Hey LocalLLaMA. We're Higgsfield AI, and we train large foundation models.

We run a massive GPU cluster and have built our own infrastructure to manage it and train large models at scale. We've been lurking in this subreddit for a long time and have learned a lot from this passionate community. Right now we have spare GPUs, and we're excited to give back.

We built a simple web app where you can upload your dataset and fine-tune a model on it: https://higgsfield.ai/

Here’s how it works:

  1. Upload your dataset in the preconfigured format to HuggingFace [1] (see the sketch below).
  2. Choose your LLM (e.g. LLaMA 70B, Mistral 7B).
  3. Place your submission in the queue.
  4. Wait for it to get trained.
  5. Pick up your trained model on HuggingFace.

[1]: https://github.com/higgsfield-ai/higgsfield/tree/main/tutorials
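
To give a flavor of the format, here is a minimal sketch using a JSONL-style instruction layout and the `datasets` library. Treat the field names and repo ID as illustrative placeholders, and follow [1] for the exact schema:

```python
# Sketch: build a tiny instruction dataset and push it to HuggingFace.
# "instruction"/"input"/"output" and the repo ID are placeholders;
# the authoritative schema is in the tutorials repo linked above.
from datasets import Dataset

records = [
    {
        "instruction": "Summarize the following paragraph.",
        "input": "Large language models are trained on web-scale text...",
        "output": "A one-sentence summary of the paragraph.",
    },
]

Dataset.from_list(records).push_to_hub("your-username/my-finetune-dataset")
# push_to_hub requires a prior `huggingface-cli login` (or HF_TOKEN set).
```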

  • mcmoose1900 · 10 months ago

    Well, awesome. Thanks.

    I’ll be over here assembling some TV show transcripts for a fandom tune.

    Out of curiosity, is it a full finetune or a LoRA? What context length?

  • herozorro · 10 months ago

Please do something like this, or provide a detailed example, showing how an open-source framework's API can be taught to a coder LLM.

How do we prepare the data (code samples, docs) so the coder LLM learns to do code completion and answer documentation questions?

    • RiskApprehensive9770 (OP) · 10 months ago

      You can train on any dataset as long as it follows our format.

      Soon we’ll publish a video tutorial.
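
      In the meantime, here is a rough sketch (illustrative only; match the field names to the exact format in the tutorials repo) of turning a repo's source files into instruction-style records:

      ```python
      # Illustrative sketch: flatten a repo's Python files into instruction-style
      # JSONL records. Field names here are placeholders for the real format.
      import json
      from pathlib import Path

      with open("dataset.jsonl", "w", encoding="utf-8") as out:
          for path in Path("my_repo").rglob("*.py"):
              record = {
                  "instruction": f"Explain and document the file {path.name}.",
                  "input": path.read_text(encoding="utf-8"),
                  "output": "",  # fill in: a hand-written explanation or docs excerpt
              }
              out.write(json.dumps(record) + "\n")
      ```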

      • herozorro · 10 months ago

        But what would a proper formatting example for code look like? Just paste in a bunch of files from a repo, or should it be more of a cheat-sheet format?