Hello friends,

I’m pretty deep into self-hosting, especially on the home automation side. I’ve got a couple of options for self-hosted AI already, but I don’t think they’ll meet my long-term goals:

  • Coral TPUs: I have two of them processing my Frigate data. They seem fine for that purpose, but as far as I can tell they’re not useful for generative AI.

  • Jetson Nano: As near as I can tell, nothing supports these except DeepStack, which appears to be abandoned. I’m bummed they haven’t gotten broader support in the community.

I’ve got plenty of rack space, and my day job is managing thousands of machines, so I’m not afraid of a more technical setup.

The used rack-mounted NVIDIA Tesla GPU servers look interesting. What are y’all using?

Requirements:

  • Rack-mounted
  • Supports local LLMs and generative AI
  • Linux-based
  • Works with Docker (see the sketch after this list)
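
For the Docker requirement, one common setup is an Ollama container (`docker run -d --gpus=all -p 11434:11434 ollama/ollama`) exposing a REST API on port 11434. Here’s a minimal sketch of querying it from Python, assuming the default port and a model that’s already been pulled into the container (the model name is just a placeholder):

```python
import json
import urllib.request

# Ollama's generate endpoint on its default port (11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3",                  # placeholder: any model pulled into the container
    "prompt": "Why self-host an LLM?",
    "stream": False,                    # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```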

flossraptorB · 1 year ago

    Nvidia is the only game in town right now. I decided on a 3090 for the time being, with the option of adding another one later. I think in two years we will have 100x better options specifically tailored for AI.
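
If you do add a second card later, a quick way to sanity-check that the container actually sees both GPUs and their VRAM is a minimal PyTorch sketch like this (nothing here is specific to the 3090):

```python
import torch

# Enumerate visible CUDA devices and report their VRAM; handy for
# confirming GPU passthrough into a Docker container worked.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible; check drivers and the --gpus flag")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")
```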

s3r3ngB · 1 year ago

    A 4090 is good enough for running many models. You probably want an A6000 for the larger ones. But many models that don’t fit in your VRAM can be quantized (scaled down to lower-precision weights) without much loss of quality.
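
To make “scaled down” concrete: the usual approach is quantization, e.g. loading the weights in 4-bit via transformers with bitsandbytes, which cuts VRAM use roughly 4x versus fp16. A minimal sketch; the model name is just a placeholder, and it needs the transformers, accelerate, and bitsandbytes packages plus a CUDA GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder: pick any causal LM

# Load weights in 4-bit; compute still happens in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across whatever GPUs are visible
)

inputs = tokenizer("Self-hosting LLMs is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```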