davidmezzetti (OP) to LocalLLaMA@poweruser.forum · 1 year ago

RAG in a couple lines of code with txtai-wikipedia embeddings database + Mistral
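A minimal sketch of the RAG flow the title describes, assuming the txtai API for loading the `neuml/txtai-wikipedia` embeddings database from the Hugging Face Hub and a Mistral-backed LLM pipeline. The prompt wording and the example question are illustrative, not from the post; the model-dependent calls are commented out since they require `pip install txtai` plus large downloads.

```python
def rag_prompt(question, context):
    """Build a simple RAG prompt combining retrieved context with the question."""
    return (
        "Answer the following question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Model-dependent part (sketch, assuming the txtai Embeddings/LLM interfaces):
#
# from txtai import Embeddings, LLM
#
# embeddings = Embeddings()
# embeddings.load(provider="huggingface-hub", container="neuml/txtai-wikipedia")
# llm = LLM("mistralai/Mistral-7B-Instruct-v0.1")
#
# # Retrieve top Wikipedia passages, then generate an answer from them
# context = "\n".join(x["text"] for x in embeddings.search("Roman Empire", 3))
# print(llm(rag_prompt("When did the Roman Empire fall?", context)))
```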
davidmezzetti (OP) · 1 year ago
It works with GPTQ models as well; you just need to install AutoGPTQ.
For GGUF models, you would need to replace the LLM pipeline with llama.cpp.
See this page for more: https://huggingface.co/docs/transformers/main_classes/quantization