Yeah you make a really good point there! I was perhaps thinking too simplistically and scaling from my personal experience with playing around on my home machine.
Although realistically, it seems the situation is pretty bad because freaky-giant-mega-computers are both training models AND answering countless silly queries per second. So at scale it sucks all around.
Setting aside the awful fad-device manufacturing cycle, if they’re really sticking to their guns on pushing this LLM madness, do you think this wave of onboard “AI chips” will make any dent in natural resource usage at scale?
(Also off-topic, but I wonder how much of a sweet, juicy exploit target these “AI modules” will turn out to be.)
Let’s be honest, corporate was just mad they didn’t think of it sooner! :p