I suspect that one significant source of underestimating AI impact is that a lot of people had no good “baseline” of machine capabilities in the first place.
If you’re in IT, or have so much as taken a CS 101 course, then you’ve been told over and over again: computers have NO common sense. Computers DO NOT understand informal language. Their capability profile is completely inhuman: they live in a world where factoring a 20-digit integer is pretty easy, but telling whether there’s a cat or a dog in a photo is pretty damn hard. This is something you have to learn, remember, understand, and internalize to be able to use computers effectively.
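To make that asymmetry concrete, here’s a minimal Python sketch (assuming sympy is installed; the specific number is just an illustrative pick). On ordinary hardware it finishes in well under a second:

```python
# The "easy for machines" side of the ledger: factoring a 20-digit integer.
from sympy import factorint

n = 5 * (2**61 - 1)   # 11529215046068469755, a 20-digit composite
print(factorint(n))   # {5: 1, 2305843009213693951: 1}, near-instantly
```

The other side of the ledger has no comparably short program: before deep learning, no few-line script could reliably tell a cat from a dog in a photo.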
And if this was your baseline, it’s obvious that current AI capabilities represent a major advancement.
But people in IT came up with loads and loads of clever tricks to make computers usable by ordinary people: to conceal the inhuman nature of machines, to use their strengths to compensate for their weaknesses. Normal people look at ChatGPT and say: “isn’t this just a slightly better Google?” or “isn’t that just Siri, but better?” They have no concept of the mountain of research, engineering, and clever hacks that went into dancing around the limitations of poor natural-language processing and understanding (NLP and NLU) to get web search to work as well as it did back in 1999, or of how hard it was to get Siri to work even as well as it did in the age before GPT-2.
In a way, for a normal person, ChatGPT just brings machines closer to what they already believed machines could do. There’s no jump. The shift from “I think machines can do X, even though they can’t do X at all; it’s actually just Y plus some clever tricks, which looks like X if you don’t look too hard” to “I think machines can do X, and they actually can do X” is hard to perceive.
And if a person knows barely anything about IT, just enough to be dangerous? Then ChatGPT may instead pattern-match to the same tricks we typically use to imitate those unnatural-for-machines capabilities: “It can’t really think, it just uses statistics and smoke and mirrors to make it look like it thinks.”
To a normal person, Sora was way more impressive than o3.