I saw an idea about getting a big LLM (30/44 GB) running fast on a cloud server.

What if this server were scalable in compute, with the rental cost shared among a group of pooled users?

Some sort of DAO to get it started? Personally I would love to link advanced LLMs up to Stable Diffusion generation, etc. And OpenAI is too restrictive for my liking. What do you think?

  • georgejrjrjrB · 1 year ago
    One base model with dozens, maybe hundreds, of adapters would be the goal.
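
    The "one base model, many adapters" idea is usually done with low-rank (LoRA-style) adapters: the large base weights are loaded once and shared, while each user or task only adds a tiny low-rank delta. A minimal numpy sketch of the math (the sizes, adapter names, and `forward` helper here are illustrative, not any particular library's API):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    d = 8  # toy hidden size; real models are in the thousands
    r = 2  # adapter rank: stores 2*d*r numbers instead of d*d

    # One shared base weight matrix (the "base model"), loaded once.
    W = rng.normal(size=(d, d))

    # Several cheap low-rank adapters, one per user/task.
    # Each is a pair (A, B); B starts at zero so a fresh adapter
    # behaves exactly like the base model before training.
    adapters = {
        name: (rng.normal(size=(d, r)), np.zeros((r, d)))
        for name in ("chat", "sd-prompts", "code")
    }

    def forward(x, adapter=None):
        """y = x @ (W + A @ B): base weights plus a per-adapter delta."""
        y = x @ W
        if adapter is not None:
            A, B = adapters[adapter]
            y = y + x @ A @ B
        return y

    x = rng.normal(size=(1, d))
    # With B initialized to zero, every adapter matches the base model.
    print(np.allclose(forward(x), forward(x, "chat")))
    ```

    The economics follow from the sizes: each adapter costs `2*d*r` parameters versus `d*d` for the base, so dozens or hundreds of adapters can ride on a single copy of the big model, and serving just swaps which `(A, B)` pair is applied per request.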