I’m not super clear on what you’re asking, but some general thoughts I had on reading your question: there’s almost always a future in which some scenario comes to pass. There’s probably some Everett branch where the sun doesn’t rise anymore. So in isolation, being shown one specifically selected future where some scenario plays out shouldn’t be evidence one way or the other for anything we don’t already consider impossible (if there were a future where the fundamental laws of physics were definitely being violated, like global entropy steadily decreasing, that would be a different story).
A different context is one where we see a future randomly sampled across all possible futures. If that future shows a world where we’re alive and our CEV is achieved, then that would probably be good evidence for updating downward on AGI risk if you have a high P(doom). Someone with a P(doom) of 10% would already expect to see a future like that with roughly 90% probability (I’m abstracting away other outcomes for simplicity; it shouldn’t affect the central point), so that being the randomly sampled future should only shift their estimate a little (someone else can probably give the exact numbers here), and it’d still be really worth it to work on preventing a 10% chance of doom. If you had a P(doom) above 50%, on the other hand, then this would be stronger evidence, and the same Bayesian calculation would have you update downward by more.
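To make the “exact numbers” bit concrete, here’s a minimal Bayesian sketch in Python. The likelihoods are made-up illustrative assumptions (say 5% of sampled futures look good under “doom” and 95% under “no doom”), so treat the outputs as a toy model rather than real estimates:

```python
# Toy Bayesian update: how much should one randomly sampled "good" future
# shift P(doom)? The likelihoods below are illustrative assumptions only
# (they are not from the question): under "doom" only 5% of sampled futures
# look good; under "no doom" 95% do.

def posterior_doom(prior_doom: float,
                   p_good_given_doom: float = 0.05,
                   p_good_given_safe: float = 0.95) -> float:
    """P(doom | the sampled future looks good), by Bayes' rule."""
    p_good = prior_doom * p_good_given_doom + (1 - prior_doom) * p_good_given_safe
    return prior_doom * p_good_given_doom / p_good

for prior in (0.10, 0.50, 0.90):
    print(f"prior P(doom) = {prior:.0%} -> posterior = {posterior_doom(prior):.1%}")

# With these assumed likelihoods:
#   10% -> ~0.6%,  50% -> ~5.0%,  90% -> ~32.1%
# Everyone applies the same update (the same likelihood ratio); how far it
# moves you in absolute terms depends on where your prior started.
```

The exact outputs depend entirely on the assumed likelihoods, but the shared piece, the “same calculation,” is just Bayes’ rule with one likelihood ratio applied to whatever prior you start from.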
If, on the other hand, you were talking about a deterministic future (ignoring quantum considerations, just stuff happening at the macro level), and we could know with some certainty that that future was good, then I’d still ask whether that’s conditional on our currently working to prevent other futures, or whether it was the default outcome. If it’s the former, that means there’s probably still a strong case for why AGI is dangerous, but that we turned out to be up to the task of solving it. Concerns about nuclear weapons don’t go away just because safety protocols turn out to be sufficient: sufficient protocols reduce our practical worries, but the intrinsic concern about their default destructiveness remains, and probably should. If it’s the latter, on the other hand, then yeah, I agree we should update down hard on AGI risk. But speculating that way doesn’t seem to give us much beyond a different framing of our priors.