The headline result: the researchers asked experts for their probabilities that we would get AI that was “able to accomplish every task better and more cheaply than human workers”. The experts thought on average there was a 50% chance of this happening by 2062 – and a 10% chance of it happening by 2026!
But on its own this is a bit misleading. They also asked by what year “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”. The experts thought on average that there was a 50% chance of this happening by 2139, and a 20% chance of it happening by 2037.
As the authors point out, these two questions are basically the same – they were put in just to test if there was any framing effect. The framing effect was apparently strong enough to shift the median date of strong human-level AI from 2062 to 2139. This makes it hard to argue AI experts actually have a strong opinion on this.
These are not the same.
The first question sounds like AGI—a single AI that can do anything we tell it to do (or anything it decides to do?) without any further development effort by humans. We would just need to provide a reasonably well-specified description of the task, and the AI would figure out on its own how to do it—by deducing it from the laws of physics, by consuming existing learning resources made for humans, by trial and error, or whatever.
The second question does not require AGI—it’s about regular AIs. It only requires that, for any task done by humans, it be possible to build an AI that does it better and more cheaply. No research into the unknown would be needed—just the application of established theory, techniques, and tools—but you would still need humans to develop and build that specific AI.
So the questions are very different, and different answers to them are to be expected, but… shouldn’t one expect the latter to happen sooner than the former?