Just wondering if anyone with more knowledge of server hardware could point me in the direction of getting an 8-channel DDR4 server up and running (estimated bandwidth is around 200 GB/s), which I'd think would be plenty for inferencing LLMs.
I'd prefer to go with used server hardware due to price; compared to buying a bunch of P40s for the same amount of memory, the power consumption is drastically lower. I'm just not sure how fast a slightly older server CPU can handle inference.

If I was looking to run 80-120 GB models, would 200 GB/s and dual 24-core CPUs get me 3-5 tokens a second?
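
As a rough sanity check: single-stream token generation is mostly memory-bandwidth bound, since each new token has to stream essentially the whole model out of RAM, so an upper bound is bandwidth divided by model size. A minimal sketch in Python, just plugging in the numbers from this post (real sustained bandwidth will be lower than the theoretical 200 GB/s):

```python
# Back-of-envelope upper bound: tokens/sec ~= bandwidth / bytes per token,
# and for a dense model the bytes read per token is roughly the model size.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

for size_gb in (80, 120):
    print(f"{size_gb} GB model @ 200 GB/s: at most "
          f"{max_tokens_per_sec(200, size_gb):.1f} tok/s")
# 80 GB model @ 200 GB/s: at most 2.5 tok/s
# 120 GB model @ 200 GB/s: at most 1.7 tok/s
```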

  • mcmoose1900B · 11 months ago

    A big issue for CPU-only setups is prompt processing. They’re kind of OK for short chats, but if you give them full context, the processing time is miserable. Nowhere close to 5 tok/sec.

    There is one exception: the Xeon Max with HBM. It is not cheap.

    So if you get a server, at least get a small GPU with it to offload prompt processing.
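
    To put very rough numbers on it: prefill (prompt processing) is compute-bound at about 2 FLOPs per parameter per prompt token. A sketch with assumed throughput figures (the TFLOPS numbers below are ballpark guesses for illustration, not benchmarks):

    ```python
    # Rough prefill-time estimate: ~2 FLOPs per parameter per prompt token.
    # The throughput figures used below are assumptions, not measurements.
    def prefill_seconds(params_billion: float, prompt_tokens: int,
                        tflops: float) -> float:
        flops = 2 * params_billion * 1e9 * prompt_tokens
        return flops / (tflops * 1e12)

    model_b, prompt = 70, 8192  # e.g. a 70B model with an 8k-token prompt
    print(f"CPU (~2 TFLOPS):  {prefill_seconds(model_b, prompt, 2):.0f} s")   # ~573 s
    print(f"GPU (~40 TFLOPS): {prefill_seconds(model_b, prompt, 40):.0f} s")  # ~29 s
    ```

    Nearly ten minutes before the first generated token is why full-context CPU prefill feels miserable, and why even a small GPU for prompt processing helps so much.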

    • fallingdowndizzyvrB · 11 months ago

      > A big issue for CPU-only setups is prompt processing. They’re kind of OK for short chats, but if you give them full context, the processing time is miserable. Nowhere close to 5 tok/sec.

      That’s where context shifting comes into play: the entire context doesn’t have to be reprocessed over and over again, just the changes.
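
      For what it’s worth, the core idea is easy to sketch. This is a toy prefix-reuse cache, not llama.cpp’s actual API (real context shifting also handles dropping old tokens off the front of the window, which this ignores):

      ```python
      # Toy sketch: keep the evaluated prefix (the "KV cache") and only run
      # the forward pass over tokens that aren't already cached.
      class ToyKVCache:
          def __init__(self) -> None:
              self.cached: list[str] = []  # tokens already evaluated

          def process(self, context: list[str]) -> int:
              # Count how much of the incoming context matches the cached prefix.
              n = 0
              while (n < min(len(self.cached), len(context))
                     and self.cached[n] == context[n]):
                  n += 1
              new_tokens = context[n:]  # only these need a forward pass
              self.cached = self.cached[:n] + new_tokens
              return len(new_tokens)

      cache = ToyKVCache()
      turn1 = ["system", "user:", "hi"]
      print(cache.process(turn1))              # 3: first turn, everything is new
      print(cache.process(turn1 + ["reply"]))  # 1: only the appended token
      ```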