Here’s a quite general argument about why we shouldn’t expect too many threshold functions in the impact of AI: because at any point, humans will be filling in the gaps of whatever AIs can’t do.
I’m not sure that I understand this.
I think the idea is that if there are 100 tasks that are required for science, or industry, or something else important, and AI can only do 5 of those tasks, then the humans will do the other 95. And then a bit later, AI can do 15 of those tasks and the humans are doing the other 85. As more and more of the work is being done by AI, things are speeding up (because AIs think faster than humans), but there’s no abrupt handoff where “AIs start doing the work instead of humans.” So overall, we see a smooth economic takeoff.
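One way to see why the takeoff comes out smooth is an Amdahl's-law-style toy model (my illustration, not from the original argument): assume the 100 tasks are equally sized and AI does its share some fixed factor faster than humans. The human-done remainder then dominates total time until AI covers nearly everything, so overall speedup rises gradually with AI's task share rather than jumping at a threshold.

```python
# Toy model (an illustrative assumption, not from the source): 100 equal
# tasks; AI does a fraction f of them ai_speed times faster than humans,
# humans do the rest at speed 1. Overall speedup varies smoothly with f.

def overall_speedup(ai_fraction: float, ai_speed: float = 10.0) -> float:
    """Relative speedup when `ai_fraction` of tasks run `ai_speed`x faster."""
    human_time = 1.0 - ai_fraction      # humans' share of the work, at speed 1
    ai_time = ai_fraction / ai_speed    # AI's share, time-compressed
    return 1.0 / (human_time + ai_time)

for n_ai_tasks in (5, 15, 50, 95, 100):
    f = n_ai_tasks / 100
    print(f"AI does {n_ai_tasks:3d}/100 tasks -> {overall_speedup(f):5.2f}x")
```

With these (assumed) numbers, AI doing 5 tasks gives roughly a 1.05x speedup, 15 tasks about 1.16x, and only as AI approaches all 100 tasks does the speedup climb toward the full 10x: acceleration, but no discontinuity.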