Hey self-hosted community 👋
My friend and I have been hacking on SecureAI Tools — an open-source AI tools platform for everyone’s productivity. And we have our very first release 🎉
Here is a quick demo: https://youtu.be/v4vqd2nKYj0
Get started: https://github.com/SecureAI-Tools/SecureAI-Tools#install
Highlights:
- Local inference: Runs AI models locally. Supports 100+ open-source (and semi-open-source) AI models.
- Built-in authentication: Simple email/password authentication, so the instance can be exposed to the internet and accessed from anywhere.
- Built-in user management: So family members or coworkers can use it as well if desired.
- Self-hosting optimized: Designed from the ground up to be self-hosted, with a simple Docker-based setup.
- Lightweight: A simple web app with a SQLite DB, to avoid running a separate database container. Data is persisted on the host machine through Docker volumes (see the sketch after this list).
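To make the "lightweight" point concrete, here's a minimal sketch of the pattern: the app opens a single SQLite file at a path that docker-compose maps to a host volume, so no separate database container is needed and data survives container restarts. This is illustrative only; the path, env var, and schema are assumptions, not SecureAI Tools' actual code:

```typescript
// Hypothetical sketch (not the project's actual code): open a SQLite file
// at a path that docker-compose maps to a host volume, e.g.
//   volumes:
//     - ./data:/data
import Database from "better-sqlite3";

const db = new Database(process.env.SQLITE_PATH ?? "/data/app.db");

// WAL mode is a common choice for a lightweight web app:
// readers don't block the single writer.
db.pragma("journal_mode = WAL");

// Illustrative schema only.
db.exec(`CREATE TABLE IF NOT EXISTS users (
  id INTEGER PRIMARY KEY,
  email TEXT UNIQUE NOT NULL,
  password_hash TEXT NOT NULL
)`);
```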
In the future, we are looking to add support for more AI tools like chat-with-documents, a Discord bot, and more. Please let us know if there are specific ones you'd like us to build, and we'll happily add them to our to-do list.
Please give it a go and let us know what you think – we'd love your feedback. Contributions are also very welcome :)
We also have a small discord community at https://discord.gg/YTyPGHcYP9, so consider joining it if you'd like to follow along.
We use Ollama as the inference engine and AFAIK Ollama doesn’t yet support AMD GPUs out of the box.
Ollama uses llama.cpp under the hood and there appears to be a way to compile llama.cpp to work with AMD GPUs: https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/
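For anyone curious how a frontend like ours talks to Ollama once it's running, here's a minimal sketch against Ollama's documented /api/generate HTTP endpoint. The model name and error handling are just illustrative, not how SecureAI Tools is actually wired up:

```typescript
// Minimal sketch: call a local Ollama server's /api/generate endpoint.
// Assumes Ollama is running on its default port (11434) and the model
// has already been pulled, e.g. `ollama pull mistral`.
async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral", // example model name
      prompt,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}

generate("Why is the sky blue?").then(console.log);
```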