If in fact most futures play out in ways that lead to human extinction, then a high estimate of extinction is correct or “rational”; if most futures don’t lead to doom, then a low estimate of doom is correct. This is a fact independent of the public / consensus epistemic state of any relevant scientific fields.
This seems wrong, or at least incomplete.
Give all the doom outcomes a combined probability p of 1/10^10000000000000000000000 and the bliss outcome 1-p. Even if there are many more ways doom can occur, it seems we might not worry much about doom actually happening. It's true you might weight the disvalue of doom much more heavily than the value of bliss, so an expected-value calculation might still work towards your view. But then we also need to consider the timing of doom, and existential risks unrelated to AI. If someone were to work through all the AI dooms and the timing of that doom and conclude (for the sake of argument, clearly) that it arrives in 50 billion years, then we have much more to worry about from our Sun than from AI.
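To make the expected-value point concrete, here is a minimal sketch. The utilities and the stand-in probability are my own illustrative assumptions, not figures from the argument above; the original 1/10^10000000000000000000000 is far too small to represent as a float, so a merely tiny probability is used instead.

```python
# Illustrative expected-value comparison (all numbers are assumptions).
p_doom = 1e-100        # stand-in for the astronomically small combined doom probability
u_bliss = 1.0          # value of the bliss outcome, in arbitrary units
u_doom = -1.0e9        # disvalue of doom, weighted far more heavily than bliss

expected_value = p_doom * u_doom + (1 - p_doom) * u_bliss
print(expected_value)  # ~1.0: bliss dominates unless |u_doom| is on the order of 1/p_doom
```

The point of the sketch: with a sufficiently small p, the doom term only dominates the expected value if the disvalue assigned to doom is itself astronomically large, on the order of 1/p.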