Exactly what the title says - right now the leadership of OpenAI is unclear, which isn't good for an industry that looks to it for direction.
We need a strong leader to focus our efforts; without one, things could get chaotic.
The sad truth is that even if such a thing as AGI can ever exist, we won't see it in any of our lifetimes. It's probably a conversation for many decades or centuries in the future, and where we are now is pretty much the nascency of widespread, conscious AI use among the masses. Of course, we've all been using AI for years, but with these GPT-powered chatbots, choosing to use AI has become a far more common and deliberate decision than it ever was before. ChatGPT is the tip of the iceberg; we don't even know whether the transformer will remain the dominant architecture going forward (it's not the best at everything, just the most "generalizable" at the moment, as far as I'm aware). What I wonder is how advanced the societies of the distant future will be, looking down on us and our relatively primitive AI the way we look at monkeys with their great stone nutcrackers.
Shouldn't we be seriously considering not attempting AGI at all? Beyond the general philosophical and ethical concerns, achieving AGI is a near-surefire way to put most of us out of a job.
OpenAI grabbed the public headlines, but in terms of AI research, they were far from the only ones doing it.