GPT-4 surprisingly excels at Googling (Binging?) to retrieve up-to-date information about current issues. Tools like Perplexity.ai are impressive. Now that we have a highly capable smaller-scale model, I feel like not enough open-source research is being directed towards enabling local models to perform internet searches and retrieve online information.
Did you manage to add that functionality to your local setup, or know some good repo/resources to do so?
LangChain has SerpAPI plugins… but those are more suited to one-shot questions than a conversation.
Also, the free tier on SerpAPI is pretty limited.
I have. You simply parse the prompt for a URL, then write a handler to retrieve the page content using whatever language or framework you like. Then you clean it up and send the content along with the prompt to the LLM, and do QA over it.
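The parse-fetch-clean-QA flow described above can be sketched in plain Python with just the standard library. All function names here (`extract_url`, `clean_html`, `build_qa_prompt`) are illustrative, not from any particular framework, and the 4000-character truncation is an arbitrary guess at a context budget:

```python
import re
from html.parser import HTMLParser

URL_RE = re.compile(r"https?://\S+")


class TextExtractor(HTMLParser):
    """Collect visible text from a page, skipping <script>/<style> blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def extract_url(prompt: str):
    """Return the first URL found in the user's prompt, or None."""
    m = URL_RE.search(prompt)
    return m.group(0) if m else None


def clean_html(html: str) -> str:
    """Strip tags and scripts, keeping only readable text."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)


def build_qa_prompt(question: str, page_text: str) -> str:
    """Stuff the cleaned page into the prompt as context for QA."""
    # Truncate so the page fits the model's context window (budget is a guess).
    return f"Context:\n{page_text[:4000]}\n\nQuestion: {question}"
```

To actually fetch the page you'd plug in `urllib.request.urlopen(url)` (or `requests`), then hand `build_qa_prompt(...)` to whatever local LLM you're running; the fetching and the model call are left out here since they depend entirely on your setup.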
I started exploring LLMs because I wanted one that would tell me the top news story or top Reddit post in my feed at random times during the day. I work from home, and it would be like having a coworker.
E.g. “Hey, did you hear what’s happening at OpenAI?”
There are 3 options that I have found, and they all work:
- text-generation-webui - the web_search extension (there is also a DuckDuckGo clone on GitHub)
- LoLLMs - there is an Internet persona which does the same: it searches the web locally and passes the results in as context
- Chat-UI by Hugging Face - also a great option, as it is very fast (5-10 secs) and shows all of its sources; great UI (they recently added the ability to search locally and run LLM models locally)
GitHub - simbake/web_search: web search extension for text-generation-webui
GitHub - ParisNeo/lollms-webui: Lord of Large Language Models Web User Interface
GitHub - huggingface/chat-ui: Open source codebase powering the HuggingChat app
If you ask me, try all three of them!