I think the cumulative argument works:
There are dozens of independent ways in which AI can cause a mass extinction event at different stages of its existence.
While each may have around a 10 percent chance a priori, cumulatively there is more than a 99 percent chance that at least one bad thing will happen (see the sketch below).
This doesn’t mean that all humans will go extinct for sure, everywhere and forever. Some may survive the mass extinction event.
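To make the arithmetic behind the "more than 99 percent" figure explicit, here is a minimal sketch, assuming (purely for illustration) independent risks of 10 percent each; on those assumptions the 99 percent threshold is crossed at roughly four dozen risks.

```python
# Minimal sketch of the cumulative argument's arithmetic.
# Assumption (illustration only): n independent risks, each with
# probability p. Then P(at least one occurs) = 1 - (1 - p)**n.
p = 0.10  # assumed per-risk probability

for n in (12, 24, 44):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:2d} independent risks -> P(at least one) = {at_least_one:.3f}")

# Output:
# 12 independent risks -> P(at least one) = 0.718
# 24 independent risks -> P(at least one) = 0.920
# 44 independent risks -> P(at least one) = 0.990
```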
I think this cumulative argument works just as well in reverse:
1. There are dozens of ways AI can prevent a mass extinction event at different stages of its existence.
2. …
If you make a list of 1000 bad things and I make a list of 1000 good things, I have no reason to think that you are somehow better at making lists than prediction markets or expert surveys.
I don’t think the two lists cancel each other out. Take medicine, for example: there are 1000 ways to die and 1000 ways to be cured, yet we all eventually die.
Dying is a symmetric problem; it’s not as though we couldn’t die without AGI. If you want to calculate p(human extinction | AGI), you have to consider the ways AGI can both increase and decrease p(extinction). And the best methods currently available to humans for aggregating low-probability estimates are expert surveys, groups of superforecasters, and prediction markets, all of which put pDoom below 20%.
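As a purely illustrative sketch of that symmetry (all numbers are made-up assumptions, not estimates), the same "at least one risk" arithmetic can be run with AGI both adding new risks and mitigating pre-existing ones; depending on the numbers chosen, the comparison can go either way.

```python
# Toy sketch of the symmetry point: AGI can add new extinction risks and also
# reduce pre-existing ones, so p(extinction | AGI) has to account for both.
# All numbers below are illustrative assumptions, not estimates.

def p_at_least_one(risks):
    """Probability that at least one of the (assumed independent) risks occurs."""
    p_none = 1.0
    for r in risks:
        p_none *= 1.0 - r
    return 1.0 - p_none

baseline_risks = [0.05, 0.03, 0.02]  # assumed pre-existing (non-AGI) risks
agi_added_risks = [0.04, 0.02]       # assumed new risks introduced by AGI
agi_mitigation = 0.5                 # assumed factor by which AGI cuts baseline risks

p_no_agi = p_at_least_one(baseline_risks)
p_with_agi = p_at_least_one(
    [r * agi_mitigation for r in baseline_risks] + agi_added_risks
)

print(f"p(extinction | no AGI) ~ {p_no_agi:.3f}")   # ~ 0.097
print(f"p(extinction | AGI)    ~ {p_with_agi:.3f}") # ~ 0.106
```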