Just wondering if anyone with more knowledge of server hardware could point me in the direction of getting an 8-channel DDR4 server up and running. Estimated memory bandwidth is around 200 GB/s, so I'd think that would be plenty for inferencing LLMs.
I'd prefer used server hardware because of the price; compared to buying a stack of P40s for the same amount of memory, the power consumption is drastically lower. I'm just not sure how fast a slightly older server CPU can handle inference.

If I were looking to run 80-120 GB models, would 200 GB/s and dual 24-core CPUs get me 3-5 tokens a second?
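You can sanity-check that with a back-of-the-envelope calculation: each generated token has to stream roughly the whole model through memory, so tokens/s is bounded by effective bandwidth divided by model size. Here's a minimal sketch; the 0.6 efficiency factor is an assumption (real-world sustained bandwidth is well below the theoretical peak), not a measured number:

```python
def est_tokens_per_sec(model_gb: float, bandwidth_gbs: float, efficiency: float = 0.6) -> float:
    """Rough upper bound on generation speed for memory-bound inference.

    Each token read streams ~the full model from RAM, so
    tokens/s ~= (achievable bandwidth) / (model size).
    The efficiency factor is an assumed fraction of peak bandwidth.
    """
    return bandwidth_gbs * efficiency / model_gb

# Estimates for an 8-channel DDR4 box at a 200 GB/s theoretical peak:
for size_gb in (80, 100, 120):
    print(f"{size_gb} GB model: ~{est_tokens_per_sec(size_gb, 200):.1f} tok/s")
```

Under those assumptions a 200 GB/s system lands closer to 1-1.5 tokens/s on an 80-120 GB model, so 3-5 tokens/s would require either higher sustained bandwidth or a smaller/quantized model.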

  • fallingdowndizzyvrB
    11 months ago

    Of course, during prompt processing you'll be bottlenecked by CPU compute rather than memory bandwidth.

    Context shifting will help with that.
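    Conceptually, context shifting avoids reprocessing the whole prompt when the context window fills: the oldest generated tokens are dropped (keeping a fixed prefix) and the KV cache is shifted rather than rebuilt. A rough sketch of the token-window bookkeeping (names and the halving heuristic are illustrative, not any particular implementation):

    ```python
    def shift_context(tokens: list, n_ctx: int, n_keep: int) -> list:
        """If the window is full, discard half of the oldest tokens
        after the kept prefix instead of reprocessing everything."""
        if len(tokens) < n_ctx:
            return tokens  # still room; nothing to shift
        n_discard = (len(tokens) - n_keep) // 2
        # Keep the prefix, drop a chunk of old tokens, keep the rest.
        return tokens[:n_keep] + tokens[n_keep + n_discard:]

    window = shift_context(list(range(8)), n_ctx=8, n_keep=2)
    print(window)  # prefix [0, 1] kept, oldest middle tokens dropped
    ```

    Since only the discarded tokens' cache entries are invalidated, the expensive prompt-processing pass doesn't have to run again for the retained tokens.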