Jugg3rnautB to LocalLLaMA@poweruser.forum · English · 1 year ago
GPU-over-IP for LLM inference?
1 comment · 1 upvote · 0 downvotes
Jugg3rnautB to LocalLLaMA@poweruser.forum · English · 1 year ago
Chassis only has space for 1 GPU - Llama 2 70b possible on a budget?
8 comments · 1 upvote · 0 downvotes