I think this argument can and should be expanded on. Historically, very smart people making confident predictions about the medium-term future of civilization have had a pretty abysmal track record. Can we pin down exactly why (what specific kind of error futurists have been falling prey to) and then see whether it applies here?
Take, for example, traditional Marxist thought. In the early twentieth century, an intellectual Marxist's prediction of a stateless, post-property utopia may have seemed to arise from a wonderfully complex yet self-consistent model, one that yielded many true predictions and had been refined by decades of rigorous debate and dense works of theory. Most intelligent non-Marxists offering counter-arguments would have been able to produce only some well-known objection, perhaps one whose standard rebuttals formed a foundational part of the Marxist model itself.
So, what went wrong? I doubt there was some fundamental self-contradiction that the Marxists missed in all of their theory-crafting. If you could go back in time and hand them a complete history of twentieth-century economics labelled as speculative fiction, I don't think their models would have updated much, so the problem wasn't just a failure to imagine the true outcome. I think it may have been, in part, a miscalibration of deductive reasoning.
Reading the old Sherlock Holmes stories recently, I found it kind of funny how irrational the hero could be. He’d make six observations, deduce W, X, and Y, and then rather than saying “I give W, X, and Y each a 70% chance of being true, and if they’re all true then I give Z an 80% chance, therefore the probability of Z is about 27%”, he’d just go “W, X, and Y; therefore Z!”. This seems like a pretty common error.
Inductive reasoning can't take you very far into the future with something that moves as fast as civilization: the error bars can't keep up past a year or two. But deductive reasoning promises much more. So long as you carefully ensure that each step is high-probability, the thinking seems to go, a chain of necessary implications can take you as far into the future as you want. Except that, like Holmes, people forget to multiply the probabilities, and a model complex enough to pierce that inductive barrier is likely to have a lot of probabilities to multiply.
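To make the arithmetic concrete, here is a minimal Python sketch (using the example numbers from the Holmes paragraph, plus some hypothetical longer chains) of how quickly the joint probability of a deductive chain decays even when every individual step looks solid:

```python
# Probability that an entire chain of inferences holds, treating the
# steps as independent and simply multiplying their probabilities.
def chain_probability(step_probs):
    result = 1.0
    for p in step_probs:
        result *= p
    return result

# Holmes's deduction: W, X, and Y at 70% each, then Z at 80% given all three.
print(chain_probability([0.7, 0.7, 0.7, 0.8]))  # ~0.27

# Longer chains of individually "very likely" steps still erode fast.
print(chain_probability([0.9] * 5))   # ~0.59
print(chain_probability([0.9] * 10))  # ~0.35
```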
The AI doom prediction comes from a similarly complex model: one founded on a lot of arguments that each seem very likely to be true, but any one of which, if false, would sink the entire thing. That motivations converge on power-seeking; that superintelligence could rapidly render human civilization helpless; that a real understanding of the algorithm that spawns AGI wouldn't offer any clear solutions; that we're actually close to AGI; and so on. If we take our uncertainty about each supporting argument seriously, small as each uncertainty may be, and multiply them together, what does the final uncertainty really look like?
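Purely as an illustration of that multiplication (the premises are paraphrased from the list above, and the confidence numbers are hypothetical placeholders, not anyone's actual estimates), the same calculation applied to the doom argument might look like this:

```python
# Hypothetical confidence in each load-bearing premise; the numbers are
# placeholders chosen for illustration, not actual estimates.
premises = {
    "motivations converge on power-seeking": 0.90,
    "superintelligence could rapidly render civilization helpless": 0.85,
    "understanding the AGI-spawning algorithm offers no clear solutions": 0.80,
    "we are actually close to AGI": 0.75,
}

joint = 1.0
for claim, confidence in premises.items():
    joint *= confidence

print(f"Probability that every premise holds: {joint:.2f}")  # ~0.46
```

With these placeholder numbers, the conjunction lands below a coin flip even though every premise is individually likely; that is the kind of residual uncertainty the question above is pointing at.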