Well, the good news is: it's not AGI, it will never be AGI, and AGI is a long way off.
The bad news is: OpenAI doesn't say this publicly; they're just firing the safety team with no explanation.
That’s contradictory.
Given the unexpected advances in AI over the past few years, I wouldn't be so sure that AGI is a long way off, and I certainly see no reason to expect that there will never be AGI. Whenever something already exists in nature, it's a good bet that we'll eventually be able to replicate how it works with technology.
I guess it depends on what we're referring to as "it". If we're talking about the LLM-based ChatGPT, that's true pretty much by definition. But if we're talking about "AI", which is a word that has been used for everything from calculators to Skynet, then yeah, at some point AI will be AGI.
Well, if humanity doesn’t obliterate itself in the coming decades, which isn’t looking too good.
Nobody specified LLMs until this comment, and OpenAI does more than just LLM research, so "it" should be taken to mean AI in general.
OpenAI is working on LLMs. LLMs will never be AGI.
They’re working on AI. LLMs are only one particular type of AI.