I think “rationally” in the title and “very very very sure” suggest you’re looking at this question in slightly the wrong way.
If in fact most futures play out in ways that lead to human extinction, then a high estimate of extinction is correct or “rational”; if most futures don’t lead to doom, then a low estimate of doom is correct. This is a fact independent of the public / consensus epistemic state of any relevant scientific fields.
A recent quote from Eliezer on this topic (context / source in the footnotes of this post):
My epistemology is such that it’s possible in principle for me to notice that I’m doomed, in worlds which look very doomed, despite the fact that all such possible worlds no matter how doomed they actually are, always contain a chorus of people claiming we’re not doomed.
Eliezer also talked a bit about how uncertainty over the right distribution can lead to high probabilities of doom towards the end of the Lunar Society podcast.
This is kind of a strawman / oversimplification of the idea, but: if you’re maximally uncertain about the future, you expect with near certainty that the atoms in the solar system end up in a random configuration. Most possible configurations of atoms have no value to humans, so being very uncertain about something and then applying valid deductive reasoning to that uncertainty can lead to arbitrarily high estimates of doom. Of course, this uncertainty is in the map and not the territory; your original uncertainty may be unjustified or incorrect. But the point is, it doesn’t really have anything to do with the epistemic state of a particular scientific field.
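A throwaway numerical sketch of that toy argument (every count below is invented purely for illustration; this is not Eliezer's actual reasoning, just the oversimplified version made concrete):

```python
from fractions import Fraction

# Invented stand-in numbers -- not claims about the real count of configurations.
total_configurations = 10**100      # "all possible arrangements of the atoms"
valuable_configurations = 10**10    # "arrangements humans would count as non-doom"

# Under a uniform (maximally uncertain) prior, P(doom) is just the fraction of
# configurations that have no value to humans.
p_doom = 1 - Fraction(valuable_configurations, total_configurations)
print(float(p_doom))  # 1.0 to float precision: almost all the mass is on "no value"
```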
Also, I don’t know of anyone informed who is “very very very sure” of any non-conditional far-future predictions. My own overall p(doom) is a “fuzzy” 90%+. There are some conditional probabilities which I would estimate as much closer to 1 or 0 given the right framing, but ~10% uncertainty in my overall model seems like the right amount of epistemic humility / off-model uncertainty / etc. (I’d say 10% seems about equally likely to be too much humility vs. too little, actually.)
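One way to read that “~10% off-model uncertainty” remark, offered only as an interpretation (the mixture weights and the agnostic fallback value below are my assumptions, not anything the comment actually states):

```python
# Interpretation sketch: mix the in-model estimate with an agnostic prior that
# applies in worlds where the whole model is wrong. All numbers illustrative.
p_doom_in_model = 0.99        # what the model itself says (assumed)
p_model_wrong = 0.10          # the "~10% off-model uncertainty"
p_doom_if_model_wrong = 0.5   # agnostic fallback if the model is broken (assumed)

p_doom_overall = (1 - p_model_wrong) * p_doom_in_model + p_model_wrong * p_doom_if_model_wrong
print(p_doom_overall)  # ~0.94, i.e. a "fuzzy 90%+" rather than near-certainty
```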
If in fact most futures play out in ways that lead to human extinction, then a high estimate of extinction is correct or “rational”; if most futures don’t lead to doom, then a low estimate of doom is correct. This is a fact independent of the public / consensus epistemic state of any relevant scientific fields.
This seems wrong, or at least incomplete.
Give all the doom outcomes a total probability p of 1/10^10000000000000000000000 and the bliss outcome 1 − p. Even if there are far more ways doom can occur, it seems we might not worry much about doom actually happening. It’s true you might weight the disvalue of doom much higher than the value of bliss, so some expected-value argument might work towards your view. But now we need to consider the timing of doom and existential risks unrelated to AI. If someone were to work through all the AI dooms and the timing of that doom and arrive at (for the sake of argument, clearly) 50 billion years, then we have much more to worry about from our Sun than from AI.
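A toy expected-value sketch of this point (every number below is invented for the sake of argument): counting the ways doom can occur is not the same as summing probability mass over them.

```python
# Toy illustration: many distinct doom outcomes, but tiny total probability mass.
n_doom_outcomes = 10**6     # many ways things can go badly -- deliberately unused below,
                            # because the count never enters the expected value
p_doom_total = 1e-22        # total probability mass assigned to all doom outcomes
p_bliss = 1 - p_doom_total  # one good outcome carrying nearly all the mass

value_doom = -1e12          # weight doom heavily if you like (illustrative)
value_bliss = 1.0

expected_value = p_doom_total * value_doom + p_bliss * value_bliss
print(expected_value)  # ~0.9999999999: doom barely moves the expectation here
```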