The problem is these devices don’t have the hardware to process input locally — it’s all sent back to their respective clouds for processing.
I believe Siri on newer phones can do some processing entirely locally, but it’s not the norm.
I would see which Intel GPUs are supported by Plex and choose a NUC with one of those.
This is the way. HA is what really got me thinking about self-hosting more seriously — I realized Google and Amazon likely knew what room I was in and when I was in it. That was enough to go down the rabbit hole.
I was pretty deep into self-hosting before spinning up the much-vaunted *arr stack — because I saw YouTube TV had gone up to $83/mo. 🏴☠️
Just ran into this myself. I fixed it by disabling auth for my local network in the config file. IMPORTANT: stop the container first, because the app rewrites its config file on shutdown and will clobber your edit.
I was then able to log in and set the password. After that the login worked fine.
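For anyone else hitting this: the app isn’t named above, but if it’s one of the *arr apps (Sonarr, Radarr, etc.), the relevant setting lives in `config.xml`. The usual sequence is `docker stop <container>`, edit the file, `docker start <container>`. A rough sketch of the edit — the element names below match what I remember from the Servarr apps, so double-check against your app’s docs:

```xml
<!-- config.xml (excerpt) — values assumed, verify against your app's wiki -->
<Config>
  <AuthenticationMethod>Forms</AuthenticationMethod>
  <!-- Skip the login prompt for LAN clients only -->
  <AuthenticationRequired>DisabledForLocalAddresses</AuthenticationRequired>
</Config>
```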
Yeah, this is what I do - all containers with Ansible to manage it all. I would not recommend containerizing HASS though.
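For the curious, a minimal sketch of what “containers with Ansible” can look like, using the `community.docker` collection (`ansible-galaxy collection install community.docker`). Host group, service, and paths here are made-up examples, not my actual stack:

```yaml
# playbook.yml — minimal sketch, assumes community.docker is installed
- hosts: homelab
  become: true
  tasks:
    - name: Run an example service as a container
      community.docker.docker_container:
        name: grafana              # placeholder service
        image: grafana/grafana:latest
        restart_policy: unless-stopped
        ports:
          - "3000:3000"
        volumes:
          - /opt/grafana:/var/lib/grafana
```

The nice part is idempotency: re-running the play only touches containers whose definition changed.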
Hey friend, you’re aiming for a setup very close to what I run. Some lessons from my fumbling:
Given your low power consumption requirements, I’d probably look at something like this:
If you want to do any AI/ML beyond Frigate, you’ll want a desktop GPU in the rack. I still haven’t found a good option here. I’ll likely get a rack case that works with desktop hardware and throw a 3090 into it.
Did you check out the Loki Docker plugin for the daemon? That worked like a charm for me.
Promtail will grab host level logs as well.
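If anyone wants to try the Loki driver route: install the plugin with `docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions`, then point the daemon’s default log driver at your Loki instance in `/etc/docker/daemon.json` and restart Docker. The URL below is a placeholder for wherever your Loki lives; it only applies to containers created after the change:

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3100/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
```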
DM if you’re comfortable with Ansible; I have the whole stack (host + Docker services) automated and can share.
Much appreciated — I think the rack mounted desktop GPU approach is best for now. Another commenter suggested we should see better options in 1-2 years and I strongly suspect they’re correct.
I totally agree on the Coral TPUs. Great for Frigate, but not much else. I’ve got 2x of the USB ones cranking on half a dozen 4K streams - works wonderfully.
And I agree in theory these Nanos should be great for all sorts of stuff, but nothing supports them. Everything I’ve seen is custom one-offs outside of DeepStack (though CodeProject.AI purports there’s someone now working on a Nano port).
Sounding like a decent gaming GPU and a 2-3U box is the ticket here.
I just recently saw this pop up in my feeds. Gonna give it a go when I get a chance. Looks very cool!
If you use Firefox, you can host your own sync server.
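The route I’ve seen uses Mozilla’s sync server image, then pointing Firefox at it via `about:config`. Command sketch only — the env var names and pref are from memory, so verify against the project README before relying on them:

```
# Run Mozilla's sync server (env var names assumed from the README)
docker run -d \
  -p 5000:5000 \
  -e SYNCSERVER_PUBLIC_URL=http://192.168.1.50:5000 \
  -e SYNCSERVER_SECRET=change-me \
  mozilla/syncserver:latest

# Then in Firefox about:config, set:
#   identity.sync.tokenserver.uri = http://192.168.1.50:5000/token/1.0/sync/1.5
```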