DanIngenius (OP) · LocalLLaMA @ poweruser.forum · Proposal of LLM hosted in a co-funded host
11 months ago

Thanks for your detailed reply. I don't think crowdsourcing GPUs is feasible or desirable, but the idea of only varying the LoRAs is interesting. Can the LoRAs be loaded separately from the models, i.e. load the base model once and use two separate LoRAs on top of it?
I like the idea; I think it's similar to something I'm already discussing with some other people. DM me if you want and I'll introduce you.
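For what it's worth, yes: a LoRA is just a small low-rank delta on top of frozen base weights, so the base model can be loaded once and adapters attached or swapped afterwards (e.g. Hugging Face PEFT exposes this via `PeftModel.load_adapter` and `set_adapter`). A minimal NumPy sketch of the idea, with made-up sizes and adapter names purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                           # model dim and LoRA rank (toy values)

W = rng.normal(size=(d, d))           # frozen base weight, loaded once

def make_lora(seed):
    """Build one low-rank adapter: a (d x r) @ (r x d) update."""
    g = np.random.default_rng(seed)
    A = g.normal(size=(r, d)) * 0.1
    B = g.normal(size=(d, r)) * 0.1
    return A, B

# Two independent adapters sharing the same base weights
adapters = {"chat": make_lora(1), "code": make_lora(2)}

def forward(x, adapter=None):
    y = W @ x                         # base model path, unchanged
    if adapter is not None:
        A, B = adapters[adapter]
        y = y + B @ (A @ x)           # low-rank correction; W is untouched
    return y

x = rng.normal(size=d)
y_chat = forward(x, "chat")           # same base, adapter "chat"
y_code = forward(x, "code")           # same base, adapter "code"
```

Since the base weights never change, switching adapters is just swapping the small `(A, B)` pair per request, which is what makes a shared multi-tenant host with per-user LoRAs cheap.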