This is probably a basic question, but how do I load the Llama 2 70B model with 8-bit quantization? I see TheBloke/Llama-2-70B-chat-GPTQ, but that repo only provides 3-bit/4-bit quantized weights. I have an 80 GB A100 and want to load Llama 2 70B in 8-bit.
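Is something like the following the right approach? This is a minimal sketch assuming the transformers + bitsandbytes + accelerate stack, loading the original fp16 checkpoint (here assumed to be the gated `meta-llama/Llama-2-70b-chat-hf` repo) with on-the-fly 8-bit quantization, rather than a pre-quantized GPTQ repo. At 8-bit the weights alone are roughly 70 GB, so it should fit on one 80 GB A100, though KV cache and activations make it tight.

```python
# Minimal sketch: load Llama 2 70B in 8-bit via bitsandbytes (LLM.int8()).
# Assumes: pip install transformers accelerate bitsandbytes, plus access to
# the gated meta-llama checkpoint (the model id below is an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"

# Quantize the linear-layer weights to 8-bit at load time.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # let accelerate place layers on the A100
)
```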
Thanks a lot!