Just wondering if anyone with more knowledge of server hardware could point me in the direction of getting an 8-channel DDR4 server up and running (estimated bandwidth is around 200 GB/s), which I'd think would be plenty for inferencing LLMs.
I'd prefer used server hardware due to price; comparing the memory amount to buying a bunch of P40s, the power consumption is drastically lower. I'm just not sure how fast a slightly older server CPU can process inference.

If I was looking to run 80-120 GB models, would 200 GB/s and dual 24-core CPUs get me 3-5 tokens a second?
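A rough back-of-the-envelope check, assuming decode is memory-bandwidth bound (every weight gets read once per generated token) and an assumed real-world bandwidth efficiency of ~60% of peak, which is a guess, not a measured figure:

```python
def estimate_tokens_per_sec(bandwidth_gb_s, model_size_gb, efficiency=0.6):
    """Rough rule of thumb: tokens/s ~= achieved bandwidth / bytes read per token.
    efficiency is an assumed fraction of peak bandwidth actually sustained."""
    return bandwidth_gb_s * efficiency / model_size_gb

for size in (80, 120):
    print(f"{size} GB model: ~{estimate_tokens_per_sec(200, size):.1f} tok/s")
```

By this estimate, 200 GB/s peak lands closer to 1-1.5 tok/s for models that size; hitting 3-5 tok/s would need either higher effective bandwidth or a smaller quantized model.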

  • AaaaaaaaaeeeeeB
    1 year ago

    No way, you’re that one guy I uploaded the f16 airoboros for! I was hoping you’d get the model, and I think you did it :)