I believe they were implying that a lot of the people who say “it’s not real AI, it’s just an LLM” are simply parroting what they’ve heard.
Which is a fair point, because AI has never meant “general AI”; it’s an umbrella term for a wide variety of intelligence-like tasks performed by computers.
Autocorrect on your phone is a type of AI: it compares the words you type against a database of known words using a “typo distance”, suggests the closest match, and adds new words to its database when you overrule it so it doesn’t make the same mistake again.
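Here’s a minimal sketch of that idea in Python, assuming “typo distance” means something like Levenshtein edit distance. The word list, threshold, and names are just illustrative, not how any real keyboard implements it:

    # Toy autocorrect: suggest the known word nearest to the input by
    # Levenshtein edit distance, and learn new words when overruled.

    def edit_distance(a, b):
        # Classic dynamic-programming Levenshtein distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    class Autocorrect:
        def __init__(self, known_words, max_distance=2):
            self.known = set(known_words)
            self.max_distance = max_distance

        def suggest(self, typed):
            # Return the nearest known word, or the input unchanged
            # if nothing is within the distance threshold.
            if typed in self.known:
                return typed
            best = min(self.known, key=lambda w: edit_distance(typed, w))
            return best if edit_distance(typed, best) <= self.max_distance else typed

        def overrule(self, typed):
            # The user rejected our correction: remember the word so
            # we don't make the same mistake again.
            self.known.add(typed)

    ac = Autocorrect({"hello", "world"})
    print(ac.suggest("worl"))   # -> "world"
    ac.overrule("worl")         # user insists "worl" was intended
    print(ac.suggest("worl"))   # -> "worl" (learned)

Real keyboards weight suggestions by word frequency and key layout too, but the point stands: pattern matching plus learning from feedback sits comfortably under the AI umbrella.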
It’s like saying a motorcycle isn’t a real vehicle because a real vehicle has two wings, a roof, and flies through the air carrying hundreds of people.
Do you mean in the everyday sense or the academic sense? I think this is why there’s such grumbling around the topic. Academically speaking that may be correct, but I think for the general public, AI has been more muddled and presented in a much more robust, general-AI way, especially in fiction. Look at any number of sci-fi movies featuring forms of AI, whether it’s the movie literally named A.I., or Terminator, Blade Runner, or more recently Ex Machina.
Each of these technically may be presenting general AI, but for the public, it’s just AI. In a weird way, this discussion is an inversion of what one usually sees between academics and the public. Generally academics are trying to get the public not to use technical terms loosely, yet here some of the public is trying to get parts of the tech/academic sphere to stop using technical terms loosely, at least as they see it.
Arguably it’s from a misunderstanding, but if anyone should understand the dynamics of language, you’d hope it would be those trying to calibrate machines to process language.
Well, that’s the issue at the heart of it, I think.
How much should we cater our choice of words to those who know the least?
I’m not an academic, and I don’t work with AI, but I do work with computers and I know the distinction between AI and general AI.
I have a little irritation with the theme myself, given that I work in the security industry and it’s now difficult to use the more common abbreviation for cryptography without getting Bitcoin mixed up in everything.
All that aside, the point is that people talking about how it’s not “real AI” often come across as people who don’t know what they’re talking about, which was the point of the image.
The funny part is, as I mentioned in my comment, isn’t that how both parties to these conversations feel? The problem is they’re talking past each other, and the worst part is that, arguably, the more educated participant should be more apt to recognize this and clarify, or better yet ask for clarification, so they can see where the disconnect is emerging and improve communication.
Also, let’s remember that it’s not the laypeople who describe the technology in personified terms like “learning” or “hallucinating”, which furthers some of the grumbling.
Well, I don’t generally expect an academic level of discourse out of image macros found on the Internet.
Usually when I see people talking about it, I do see people making clarifying comments and asking questions like you describe. Sorta like when I described how AI is an umbrella term.
I’m not sure I’d say that learning and hallucinating are personified terms. We see both of those out of any organism complex enough to have something that works like a nervous system, for example.