So I have text-generation-webui by oobabooga running in one place, and Stable Diffusion in another tab. I’m looking for ways to expose these projects’ APIs and then combine them, so that the text model can call out to other models when it needs to, similar to what GPT-4 does.
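In case it helps, here’s a minimal sketch of wiring the two together: ask the LLM to turn a request into an image prompt, then hand that prompt to Stable Diffusion. This assumes text-generation-webui was launched with `--api` (its OpenAI-compatible chat endpoint, default port 5000) and AUTOMATIC1111’s stable-diffusion-webui with `--api` (default port 7860); the ports, paths, and the `build_chat_payload` helper are assumptions to adapt to your setup.

```python
import base64
import json
import urllib.request

# Assumed endpoints -- adjust to however you launched each server:
#   text-generation-webui with --api  -> OpenAI-compatible, port 5000
#   stable-diffusion-webui with --api -> A1111 REST API, port 7860
LLM_URL = "http://127.0.0.1:5000/v1/chat/completions"
SD_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"


def post_json(url, payload):
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def build_chat_payload(user_request):
    """Build a chat request asking the LLM for an SD-style prompt."""
    return {
        "messages": [
            {"role": "system",
             "content": "Rewrite the user's request as a short, "
                        "comma-separated Stable Diffusion prompt. "
                        "Reply with the prompt only."},
            {"role": "user", "content": user_request},
        ],
        "max_tokens": 80,
    }


def ask_llm_for_image_prompt(user_request):
    """Have the text model produce a Stable Diffusion prompt."""
    reply = post_json(LLM_URL, build_chat_payload(user_request))
    return reply["choices"][0]["message"]["content"].strip()


def generate_image(prompt, path="out.png"):
    """Send the prompt to txt2img and save the first returned PNG."""
    reply = post_json(SD_URL, {"prompt": prompt, "steps": 20})
    with open(path, "wb") as f:
        f.write(base64.b64decode(reply["images"][0]))
    return path


# Usage (with both servers running):
#   prompt = ask_llm_for_image_prompt("draw me a castle at sunset")
#   generate_image(prompt)
```

The glue is just HTTP: once both servers expose an API, the “GPT-4 calling other models” part is a loop that decides which endpoint to hit next.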

I’m also looking for a solution where the text-generation output can execute the code it writes, and then infer from the results what to do next (I know the risks, but yeah).
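For the execute-and-infer part, a bare-bones sketch: run each model-generated snippet in a subprocess with a timeout, capture the output, and format it as the next message back to the model. The function names here are made up for illustration, and note that a subprocess is not a real sandbox.

```python
import subprocess
import sys


def run_snippet(code, timeout=5):
    """Run model-generated Python in a separate process and capture
    its output, so the result can be fed back into the next prompt.
    NOTE: a subprocess with a timeout is NOT a sandbox -- the snippet
    can still touch the filesystem and network. Use a container or VM
    for anything you don't fully trust."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # raises subprocess.TimeoutExpired on hang
    )
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "returncode": proc.returncode,
    }


def feedback_message(result):
    """Format the execution result as a follow-up message for the LLM."""
    if result["returncode"] == 0:
        return "The code ran and printed:\n" + result["stdout"]
    return "The code failed with:\n" + result["stderr"]
```

The loop is then: generate code, `run_snippet` it, append `feedback_message(result)` to the conversation, and ask the model to continue.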

  • DanIngeniusB · 1 year ago

    This is something I’m interested in working on. I want to crowdfund a good LLM + SD + TTS voice host; DM me if you’re interested in taking part!

    • StarkboyOPB · 1 year ago

      Thanks for your answer! I get it. These projects do give me some ideas. I didn’t know such things were called ‘agents’ in this space.

  • LyPretoB · 1 year ago

    You have all the APIs, so what’s stopping you from putting something like this together? For me personally, the only challenge is finding projects compatible with the M1 that offer Metal offloading, but on Linux it should be relatively straightforward to implement.