The question is probably too basic, but how do I load the Llama 2 70B model with 8-bit quantization? I see TheBloke/Llama-2-70B-chat-GPTQ, but they only offer 3-bit/4-bit quantization. I have an 80 GB A100 and want to load Llama 2 70B with 8-bit quantization. Thanks a lot!

  • mcmoose1900B
    1 year ago

    Grab the original (fp16) models. They can be quantized to 8-bit on the fly with bitsandbytes.
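
    A minimal sketch of what that looks like with Hugging Face `transformers` (assuming `transformers`, `accelerate`, and `bitsandbytes` are installed, and that you have access to the gated `meta-llama/Llama-2-70b-chat-hf` repo; the model ID and helper function name here are just illustrative):

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Official fp16 checkpoint on the Hub (gated; request access first).
    MODEL_ID = "meta-llama/Llama-2-70b-chat-hf"

    def load_llama2_8bit(model_id: str = MODEL_ID):
        """Load the fp16 checkpoint, quantizing weights to 8-bit on the fly."""
        quant_config = BitsAndBytesConfig(load_in_8bit=True)
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            quantization_config=quant_config,
            device_map="auto",  # place layers across available GPU memory
        )
        return model, tokenizer

    if __name__ == "__main__":
        model, tokenizer = load_llama2_8bit()
    ```

    Note that 70B parameters at 8 bits is roughly 70 GB of weights, so it should fit on a single 80 GB A100, though headroom for the KV cache will be tight at longer contexts.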