I plan to infer 33B models at full precision; 70B is second priority but would be a nice touch. Would I be better off getting an AMD EPYC server CPU like this or an RTX 4090? With the EPYC, I can get 384GB of DDR4 RAM for ~400 USD on eBay; the 4090 only has 24GB. Moreover, the 4090 and the EPYC setup with RAM cost about the same. Which would be a better buy?

  • fallingdowndizzyvrB · 1 year ago

    I plan to infer 33B models at full precision

    At full precision, as in FP16, you are not going to be able to fit it in a 4090; a 33B model needs roughly 66GB for the weights alone. So if that’s your goal, between the choices you’ve given there is only one: the EPYC. But it won’t be speedy.
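
    Quick back-of-the-envelope math: weights-only memory is roughly parameter count times bytes per parameter. A minimal sketch in Python (the quantized bytes-per-param figures are approximations, and this ignores KV cache and activation overhead):

    def weight_memory_gib(params_billion: float, bytes_per_param: float) -> float:
        # Weights-only footprint in GiB; KV cache and activations add more.
        return params_billion * 1e9 * bytes_per_param / 1024**3

    # Approximate bytes per parameter for common formats.
    for fmt, bpp in [("FP16", 2.0), ("Q8_0", 1.0625), ("Q4_K_M", 0.5625)]:
        print(f"33B @ {fmt}: ~{weight_memory_gib(33, bpp):.0f} GiB")
    # 33B @ FP16:   ~61 GiB -> nowhere near a 4090's 24GB
    # 33B @ Q8_0:   ~33 GiB
    # 33B @ Q4_K_M: ~17 GiB -> what actually fits on a single 24GB card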

  • mcmoose1900B · 1 year ago

    If you must run at high precision… the best system in that budget is probably a compromise?

    Grab a 3090 or 3060, pair it with the most RAM bandwidth you can get, and settle for a more modest CPU. The GPU takes over prompt processing and enough of the layers to speed up generation.
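
    A minimal sketch of that split with llama-cpp-python, assuming a quantized GGUF model (the path, layer count, and thread count are placeholder values to tune for your hardware):

    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/33b-q4_k_m.gguf",  # hypothetical path
        n_gpu_layers=40,  # layers offloaded to the GPU; raise until VRAM is full
        n_ctx=4096,       # context window
        n_threads=16,     # CPU threads for the layers left in system RAM
    )

    out = llm("Q: How much VRAM does a 33B FP16 model need? A:", max_tokens=64)
    print(out["choices"][0]["text"])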

  • multiverse_fanB · 1 year ago

    If I had the money, I’d go with the CPU.

    Also, I’m not sure a 4090 could run 33B models at full precision. Wouldn’t that require something like 70GB of VRAM?

  • easyllaamaB · 1 year ago

    My AMD 7950X3D (16 cores / 32 threads) with 64GB DDR5 and a single RTX 4090 runs 13B Xwin GGUF q8 at 45 T/s. With exllamav2, 2x 4090s can run 70B q4 at 15 T/s. The motherboard is an Asus ProArt AM5. For local LLaMA I think you’d get similar speeds with RTX 3090s, but in SD the 4090 is about 70% faster.