Thanks for this. I’ve only worked with RAG on OpenAI models, and there’s a lot of prompt tuning needed to get decent results. A KG helps define the semantic elements and the relationships between document fragments and the user query for RAG.
That said, I’m still relying on the vector database to do most of the heavy lifting of filtering relevant results before feeding them into an LLM. Having an LLM clean up or summarize the user query and create a KG from the vector database’s response could lead to more accurate answers.
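Roughly what I mean, sketched against the OpenAI Python SDK. The `vector_db.search(...)` helper, model names, and prompt wording are placeholders for whatever your stack uses, not anything specific:

```python
# Sketch: LLM cleans up the user query, the vector DB retrieves chunks,
# and a second LLM pass extracts (subject, relation, object) triples from
# those chunks as a small KG before the final answer is generated.
# `vector_db` is a hypothetical client with a search(embedding, k) method.
from openai import OpenAI

client = OpenAI()

def rewrite_query(raw_query: str) -> str:
    """Have the LLM clean up / summarize the user query before embedding."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Rewrite the user query as a clear, self-contained search query."},
            {"role": "user", "content": raw_query},
        ],
    )
    return resp.choices[0].message.content

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def extract_triples(chunks: list[str]) -> str:
    """Turn the retrieved chunks into (subject, relation, object) triples."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Extract (subject, relation, object) triples from the text, one per line."},
            {"role": "user", "content": "\n\n".join(chunks)},
        ],
    )
    return resp.choices[0].message.content

def answer(raw_query: str, vector_db, k: int = 5) -> str:
    query = rewrite_query(raw_query)
    chunks = vector_db.search(embed(query), k=k)  # hypothetical vector DB call
    triples = extract_triples(chunks)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the facts in this knowledge graph:\n" + triples},
            {"role": "user", "content": raw_query},
        ],
    )
    return resp.choices[0].message.content
```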
That is the promise. Of course, you still need to figure out, for your app domain, whether a concept-level KG, a chunk-level KG, or some in-between option like a CSKG is the right approach.
One thing I find helpful with prompt design is to spend less effort on writing instructions and replace them with specific examples instead. That swaps word-smithing for in-context learning samples. You build up the examples iteratively: run the same prompt over more text, fix the output, and add it to the example list… until you reach your context budget for the system prompt.
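A minimal sketch of that loop, assuming a JSON-extraction task: keep a list of (input, expected output) pairs and append them to the system prompt until a token budget is hit. The task, example content, and budget are made up; tiktoken is only used to count tokens.

```python
import tiktoken

INSTRUCTION = "Extract the product name and price from the text as JSON."
EXAMPLES = [
    ("Acme anvil, now $19.99!", '{"product": "Acme anvil", "price": 19.99}'),
    ("Get the Roadrunner trap for 45 dollars", '{"product": "Roadrunner trap", "price": 45.0}'),
    # ...keep appending corrected examples from real failures...
]

def build_system_prompt(budget_tokens: int = 2000) -> str:
    enc = tiktoken.get_encoding("cl100k_base")
    prompt = INSTRUCTION
    for text, expected in EXAMPLES:
        candidate = prompt + f"\n\nInput: {text}\nOutput: {expected}"
        if len(enc.encode(candidate)) > budget_tokens:
            break  # stop once the system-prompt context budget is reached
        prompt = candidate
    return prompt
```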
Yeah, that’s what I do too: an example input paired with the expected JSON output. The same idea works for calculations: instead of telling the LLM each calculation step, plug in real numbers and show the result of each step in sequence.
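Something like this (the numbers and task are made up, just to show the pattern of worked steps the model can imitate):

```python
# One few-shot example where each step is worked through with real numbers
# instead of being described in prose instructions.
CALC_EXAMPLE = """\
Input: unit_price=12.50, quantity=4, tax_rate=0.08
Subtotal: 12.50 * 4 = 50.00
Tax: 50.00 * 0.08 = 4.00
Total: 50.00 + 4.00 = 54.00
Output: {"total": 54.00}
"""
```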
Sometimes vector search returns inaccurate results for really short queries, or ones with misspellings or SMS-speak. I find it helps to have an LLM expand and correct the query before creating an embedding vector from it.
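A rough sketch of that step, again with the OpenAI SDK; the model names and prompt wording are placeholders:

```python
# Have an LLM fix spelling and expand SMS-speak before embedding the query.
from openai import OpenAI

client = OpenAI()

def expand_query(raw_query: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Fix spelling and expand abbreviations in this search query. Return only the corrected query."},
            {"role": "user", "content": raw_query},
        ],
    )
    return resp.choices[0].message.content

def embed_query(raw_query: str) -> list[float]:
    cleaned = expand_query(raw_query)  # e.g. "whr cn i chk my bal" -> "where can I check my balance"
    resp = client.embeddings.create(model="text-embedding-3-small", input=cleaned)
    return resp.data[0].embedding
```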