I want to use LLMs to automate data analysis and surface insights to my users, but I often notice insights being generated from factually incorrect data. I've tried tuning my prompts, changing the structure in which I pass data to the LLM, and few-shot learning, but there is still some chance of hallucination. How can I build a production-ready application where these insights are surfaced to end users and presenting incorrect insights is unacceptable? I'm out of ideas. Any guidance is appreciated 🙏🏻
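One mitigation that is often suggested for this problem: don't let the LLM compute the numbers at all. Have it emit a structured claim (e.g. as JSON), recompute the cited figure directly from the source data in code, and suppress any insight whose numbers don't match. A minimal sketch of that idea, assuming a hypothetical JSON shape for the model's output (the field names and data here are illustrative, not a real API):

```python
import json

# Example source data the insight is supposed to describe.
data = {"monthly_revenue": [1200, 1350, 1500]}

# Suppose the LLM was instructed to return a structured insight like this
# (in practice you'd get it from your model call, ideally with a JSON schema).
llm_output = json.dumps({
    "claim": "Revenue grew 25% from the first to the last month.",
    "metric": "monthly_revenue",
    "computed_value": 25.0,
})

def verify_growth_claim(output_json: str, source: dict,
                        tolerance: float = 0.5) -> bool:
    """Recompute the cited figure from raw data; reject on mismatch."""
    insight = json.loads(output_json)
    series = source[insight["metric"]]
    actual = (series[-1] - series[0]) / series[0] * 100
    return abs(actual - insight["computed_value"]) <= tolerance

if verify_growth_claim(llm_output, data):
    print("insight verified; safe to surface")
else:
    print("mismatch; suppress insight")
```

This only guards numeric claims, and it assumes you can constrain the model to a machine-checkable output format; qualitative claims would still need human review or a separate check.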

  • software-n-erdOPB
    1 year ago

    I guess people just want to learn. If you think this isn't the right approach, just say so :)