Upcoming AGI x-risk upweights the simulation hypothesis for me because...
Of all the people's lives that exist and have ever existed, what are the chances that I'm living one of the most prosperous lives in all of humanity, only to descend into facing the upcoming rapture of the entire world? It sounds like a video game, or a choose-your-own-adventure from another life...
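To make "what are the chances" concrete, here is a rough back-of-envelope count of the reference class, using approximate demographic estimates (the Population Reference Bureau puts total humans ever born at roughly 117 billion); the figures are illustrative, not precise:

```python
# Rough anthropic back-of-envelope; both figures are approximate estimates.
EVER_BORN = 117e9   # ~total humans ever born (PRB-style estimate)
ALIVE_NOW = 8e9     # ~current world population

fraction_alive_now = ALIVE_NOW / EVER_BORN
print(f"{fraction_alive_now:.1%}")  # 6.8%
```

So on a simple person-count, being alive in the present era is a roughly 1-in-15 draw rather than an astronomically unlikely one; the intuition of improbability has to rest on something finer-grained than just "being alive now."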
Interestingly, J. Miller recently wrote on Twitter that if a person gives higher weight to AI risk, she should also give higher credence to the simulation hypothesis, since she already believes there is a high chance that a superintelligence capable of creating simulations will appear.
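Miller's point can be sketched with a toy Bostrom-style observer count. Everything here (the function, the parameter values) is an illustrative assumption, not anything stated in the thread: suppose each civilization that reaches superintelligence runs some number of ancestor simulations, each containing as many observers as the one "base" history.

```python
def simulated_fraction(p_superintelligence, sims_per_civ):
    """Toy Bostrom-style count: fraction of human-like observers who are
    simulated, if a civilization reaches superintelligence with probability
    p_superintelligence and then runs sims_per_civ ancestor simulations,
    each as observer-rich as the single base history."""
    expected_sims = p_superintelligence * sims_per_civ
    return expected_sims / (expected_sims + 1)

# Raising your credence in superintelligence raises the simulated fraction:
print(simulated_fraction(0.05, 100))  # ~0.83
print(simulated_fraction(0.50, 100))  # ~0.98
```

In this toy model, the simulated fraction increases monotonically with the probability assigned to superintelligence arising, which is the shape of the update Miller is pointing at.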
There are many reasons future AI might choose to do this
Yeah, but almost all of them are because we taught them well. Sure, curiosity might push them to do it, but not with any significant amount of compute power.
Even unaligned AI will create simulations of the past in order to estimate the probability of different types of AI arising, and thus predict which types of alien AIs it may meet in space or acausally trade with across the multiverse.
I don’t see the probability-estimation causality here—I don’t understand your priors if you’re updating this way. If we’re in a simulation, the fact that we’re making some progress on AI-like modeling doesn’t seem to DEPEND on being in that simulation. If we’re on the “outside”, and are actually in a “natural” universe, this kind of transformer doesn’t seem to provide any evidence on whether we can create full-fidelity simulations in the future.
The simulation hypothesis DEPENDS on the simulation being self-contained enough that there are no in-universe tests which can prove or disprove it, AND on being detailed enough to contain agents of sufficient complexity to wonder whether it’s a simulation. Neither of those requirements are informed by current technological advances or measurements.
Note: I currently think of the simulation hypothesis as similar to MWI in quantum mechanics—it’s a model that cannot be proven or disproven, and has zero impact on predicting future experiences of humans (or other in-universe intelligences).
Is there a more charitable interpretation of the original line of thinking than "my soul selected this particular body out of all the available ones"?

You being you, as you are, is a product of your body developing in the circumstances it happened to develop in.