Hoppss to LocalLLaMA@poweruser.forum • Hardware question: combining a 3090 and a p40 · 1 year ago

This is not true. I have split two separate LLM models partially across a 4090 and a 3080 and run inference on both at the same time. This can be done in oobabooga's repo with just a little tinkering.
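The splitting described above works by assigning a portion of each model's layers to each GPU, roughly in proportion to available VRAM (oobabooga's loaders expose this as a gpu-split / tensor_split setting). A minimal sketch of that proportional split, using a hypothetical helper (not from oobabooga's code) and assuming a 24 GB 4090 paired with a 10 GB 3080:

```python
def split_layers(total_layers, vram_gb):
    """Divide a model's layers across GPUs in proportion to each GPU's VRAM.

    Returns a per-GPU layer count summing to total_layers, the same idea
    behind tensor_split-style settings in multi-GPU loaders.
    """
    total_vram = sum(vram_gb)
    # Ideal fractional share of layers for each GPU
    raw = [total_layers * v / total_vram for v in vram_gb]
    counts = [int(r) for r in raw]
    # Hand leftover layers to the GPUs with the largest fractional remainders
    leftover = total_layers - sum(counts)
    by_remainder = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in by_remainder[:leftover]:
        counts[i] += 1
    return counts

# A 40-layer model across a 24 GB and a 10 GB card
print(split_layers(40, [24, 10]))  # -> [28, 12]
```

Running two models side by side is then just two such splits, each leaving headroom on both cards for the other model's layers and KV cache.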
Hoppss to Gaming@level-up.zone · English · 1 year ago
Steam review summarizer for those that use ChatGPT