Hey self-hosted community 👋

My friend and I have been hacking on SecureAI Tools — an open-source AI tools platform for everyone’s productivity. And we have our very first release 🎉

Here is a quick demo: https://youtu.be/v4vqd2nKYj0

Get started: https://github.com/SecureAI-Tools/SecureAI-Tools#install

Highlights:

  • Local inference: Runs AI models locally. Supports 100+ open-source (and semi-open-source) AI models.
  • Built-in authentication: Simple email/password authentication, so the instance can be opened to the internet and accessed from anywhere.
  • Built-in user management: So family members or coworkers can use it too, if desired.
  • Self-hosting optimized: Ships as a simple Docker Compose setup, so it’s easy to deploy on your own hardware.
  • Lightweight: A simple web app with a SQLite DB, to avoid running an additional database container. Data is persisted on the host machine through Docker volumes.
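
If you want to poke at the inference side directly, here’s a minimal sketch. It assumes the Ollama-compatible HTTP endpoint that the Docker Compose setup exposes on port 11434 is reachable on localhost, and the model name is just an example — adjust both to your setup:

```python
import json
import urllib.request

# Assumed endpoint: the compose file points the web app at the inference
# container on port 11434 (Ollama's HTTP API).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually call the server (requires the inference container running):
#   with urllib.request.urlopen(build_generate_request("mistral", "Hello")) as r:
#       print(json.loads(r.read())["response"])
```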

In the future, we are looking to add support for more AI tools like chat-with-documents, a Discord bot, and many more. Please let us know if there are specific ones you’d like us to build, and we’ll be happy to add them to our to-do list.

Please give it a go and let us know what you think. We’d love your feedback, and contributions are always welcome :)

We also have a small Discord community at https://discord.gg/YTyPGHcYP9, so consider joining if you’d like to follow along.

  • jay-workai-toolsOP · 1 year ago

    Hardware requirements:

    • RAM: As much as the AI model requires. Most models have a variant that works well on 8 GB of RAM.
    • GPU: Recommended but not required. CPU-only mode works too, but will be slower on Linux, Windows, and Intel Macs. On M1/M2/M3 Macs, inference speed is really good.
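
As a rough sanity check on those RAM numbers: weight memory is roughly parameter count times bytes per weight, so a 7B-parameter model quantized to 4 bits needs about 3.5 GB for the weights alone (KV cache and runtime overhead come on top). A quick sketch — the helper name is just illustrative:

```python
def approx_weight_ram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Estimate model weight memory in GB: params * (bits / 8) bytes.

    Ignores KV cache and runtime overhead, which add more on top.
    """
    # params_billion * 1e9 params * (bits/8) bytes, divided by 1e9 bytes/GB:
    # the 1e9 factors cancel.
    return params_billion * bits_per_weight / 8

# A 7B model at 4-bit quantization: ~3.5 GB of weights, which is why
# 8 GB of RAM is a workable floor for many models.
print(approx_weight_ram_gb(7, 4))  # 3.5
```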

    (For some reason, my response to original comment isn’t showing up so reposting here)

  • I_EAT_THE_RICH · 1 year ago

    I’m going to be honest, I’m sick and tired of repackaged, industry standard software that is just an nginx reverse proxy and underpowered authentication system.

    Self-hosting is already easy. SSL is easy. LDAP and SSO are easy. If people actually wanted to help, they’d make tutorials instead of opinionated, branded tools that aren’t as flexible.

    Just my two cents

    • jay-workai-toolsOP · 1 year ago

      This is a fair point! We are open to integrating SSO. What are some popular SSO providers that the self-hosting community likes to use? I can look into how much effort it would take for us to support the most popular ones.

  • Woke_killa · 1 year ago

    Does this make sense on a home server? Will responses avoid taking several dozen seconds, and will the quality be no worse than ChatGPT? I’m currently using the OpenAI API, and it’s like level 0 for me. So is your project better or worse?

  • niemand112233 · 1 year ago

    I can’t get it running with my GPU.

    I get this error:

    parsing /root/secure-ai-tools/docker-compose.yml: yaml: line 19: did not find expected key

    This is my .yaml:

    services:
      web:
        image: public.ecr.aws/d8f2p0h3/secure-ai-tools:latest
        platform: linux/amd64
        volumes:
          - ./web:/app/volume
        env_file:
          - .env
        environment:
          - INFERENCE_SERVER=http://inference:11434/
        ports:
          - 28669:28669
        command: sh -c "cd /app && sh tools/db-migrate-and-seed.sh ${DATABASE_FILE} && node server.js"
        depends_on:
          - inference

      inference:
        image: ollama/ollama:latest
        volumes:
          - ./inference:/root/.ollama
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: "all"
                  capabilities: [gpu]
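
For what it’s worth, a "did not find expected key" error is almost always a YAML indentation problem: every key under a service (image, volumes, deploy, …) must sit at a consistent indent level. Assuming Docker Compose v2 is installed, you can ask Compose itself to pinpoint the offending line without starting any containers:

```shell
# Parse and validate docker-compose.yml without starting anything.
# On a YAML error this prints the exact line; --quiet suppresses the
# normalized config dump on success.
docker compose -f docker-compose.yml config --quiet && echo "compose file OK"
```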