I think this cumulative argument works:
1. There are dozens of ways AI could prevent a mass-extinction event at different stages of its existence.
2. …
If you make a list of 1000 bad things and I make a list of 1000 good things, I have no reason to think that you are somehow better at making lists than prediction markets or expert surveys.
I don’t think the two lists cancel each other out. Take medicine, for example: there are 1000 ways to die and 1000 ways to be cured, yet we eventually die.
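The medicine analogy can be made quantitative with a toy hazard-rate model (the numbers below are purely illustrative, not actuarial data): even if cures eliminate most individual causes of death, any residual per-period hazard still drives cumulative survival toward zero, so a long list of cures doesn't simply offset a long list of ways to die.

```python
# Toy constant-hazard survival model. All numbers are hypothetical,
# chosen only to illustrate why the "good list" and "bad list" are
# asymmetric over long time horizons.

def survival_probability(annual_hazard: float, years: int) -> float:
    """P(still alive after `years`) under a constant per-year hazard."""
    return (1.0 - annual_hazard) ** years

# Suppose medicine eliminates 90% of a baseline 1% annual hazard.
baseline_hazard = 0.01
cured_hazard = 0.001

print(survival_probability(cured_hazard, 100))   # ~0.905 after a century
print(survival_probability(cured_hazard, 5000))  # ~0.0067 — we eventually die
```

The point of the sketch: reducing the hazard stretches the timescale but does not change the limit, which is the asymmetry the medicine analogy is gesturing at.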
Dying is a symmetric problem; it’s not as if we can’t die without AGI. If you want to calculate p(human extinction | AGI), you have to consider the ways AGI can both increase and decrease p(extinction). And the best methods currently available to humans for aggregating low-probability estimates are expert surveys, groups of superforecasters, and prediction markets, all of which agree on p(doom) < 20%.
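One standard way such low-probability estimates get aggregated is pooling them as the geometric mean of odds, which behaves better than a simple average when the probabilities are small. A minimal sketch (the individual estimates below are hypothetical placeholders, not actual survey or market data):

```python
import math

def pool_geometric_odds(probs: list[float]) -> float:
    """Combine probability estimates via the geometric mean of their odds."""
    log_odds = [math.log(p / (1.0 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    # Convert the pooled log-odds back to a probability.
    return 1.0 / (1.0 + math.exp(-mean_log_odds))

# Hypothetical p(doom) estimates from four forecasters:
estimates = [0.02, 0.05, 0.10, 0.15]
print(round(pool_geometric_odds(estimates), 3))  # ≈ 0.063
```

Note the pooled value sits below the arithmetic mean (0.08 here): geometric-odds pooling gives extreme low estimates more pull, which is one reason it is often preferred for tail-risk questions.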