I saw an idea about getting a big LLM (30/44 GB) running fast on a cloud server.

What if that server were scalable in compute, with the rental cost shared among a group of users?

Some sort of DAO to get it started? Personally I would love to link advanced LLMs up to SD generation etc. And OpenAI is too heavily filtered for my liking. What do you think?

  • DanIngeniusOPB · 11 months ago

    Thanks for your detailed reply. I don’t think crowdsourcing GPUs is feasible or desirable, but the idea of only switching between different LoRAs is interesting. Can the LoRAs be loaded separately from the models, i.e. load the model once and then use two separate LoRAs?

    • georgejrjrjrB · 11 months ago

      One base model with dozens, maybe hundreds, of adapters would be the goal.
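
      To see why this works: a LoRA adapter is just a pair of small low-rank matrices added on top of a frozen base weight, so the base model stays loaded once and swapping adapters only swaps the small matrices. A minimal NumPy sketch of the idea (the names, dimensions, and `make_lora_adapter` helper are illustrative, not any particular library’s API):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      d = 8
      # Frozen base weight, loaded once (stand-in for the shared base model).
      W = rng.standard_normal((d, d))

      def make_lora_adapter(rank=2, seed=None):
          """A LoRA adapter is just a low-rank pair (A, B); W is never copied."""
          r = np.random.default_rng(seed)
          A = r.standard_normal((rank, d)) * 0.1   # down-projection
          B = r.standard_normal((d, rank)) * 0.1   # up-projection
          return A, B

      def forward(x, adapter=None):
          """y = W x, plus the low-rank correction B(Ax) if an adapter is set."""
          y = W @ x
          if adapter is not None:
              A, B = adapter
              y = y + B @ (A @ x)
          return y

      # Two adapters sharing one base; switching is cheap, no model reload.
      chat_adapter = make_lora_adapter(seed=1)
      code_adapter = make_lora_adapter(seed=2)

      x = rng.standard_normal(d)
      y_base = forward(x)
      y_chat = forward(x, chat_adapter)
      y_code = forward(x, code_adapter)
      ```

      Real serving stacks apply the same trick per weight matrix inside the transformer, which is what makes the one-base-model-many-adapters setup cheap: each adapter is a few percent of the base model’s size.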