A pet peeve: articles and essays that say things like, ‘We’re going through a period of rapid change,’ or ‘This is an unusually disruptive time,’ in a way that implies that things will go back to normal in a few years. It’s a pretty strong sign that an author doesn’t have any actual mental model for what’s happening with AI, because it’s clearly a ridiculous idea as soon as you actually think about it. Nearly the only coherent stories where things are about to slow back down for humans are the ones where we’re dead. Even if AI capability increases stopped today, we would still have quite a few years of rapid change ahead of us. And if someone is bundling in an expectation of capabilities leveling out, they sure need to justify that. But they don’t justify it, because they haven’t actually thought anything through, they’re just saying words.
Having a wrong mental model is not the same as not having a mental model at all. I agree that expecting the capabilities to level out soon is unjustified, but it’s probably what most people believe.
This is a lazy but natural generalization from past experience: there are no flying cars. Light bulbs are everywhere, but their use never grew exponentially to the point of burning down entire cities. All white-collar jobs require computers, but you still need plumbers to fix broken pipes.
Why should this new shiny toy be any different? Priors say the hype is unjustified.
Sure, we know better, but most people do not think on that level. They do not see that some things generalize in ways that most things don't. A better mousetrap only replaces the older mousetrap, but a computer can replace a typewriter, a calculator, a television, a phone, and many other things, to the degree that some people already use computers for most of what they do. Artificial intelligence will be even more like this for intellectual tasks, more still once it gets robotic bodies, and it could eventually take humans out of the loop entirely.
The outside view heuristic fails when it encounters something that happens to be truly exceptional.