I don’t think it’s clear that knowing we’re in a simulation “destroys” the simulation. This assumes that the occupants’ belief that they are being simulated creates an invalidating difference from the desired reference class of plausible pre-singularity civilizations, but I don’t think that’s true:
Actual, unsimulated pre-singularity civilizations are in epistemic positions similar to ours, so many of their influential occupants may wrongly but rationally believe they are simulated, which may affect the trajectory of their ASI development. Knowing the effects of simulation beliefs is therefore important for modeling actual ASIs.
This is true only if we assume that a base reality for our civilization exists at all. But knowing that we are in a simulation shifts the main utility of our existence, as Nesov wrote above.
For example, if we can break out of some simulation, that would be a more important event than whatever happens in the base reality, where we likely go extinct anyway.
And since simulations vastly outnumber the base reality, even a small chance of breaking out from inside a simulation, perhaps via negotiation with its owners, has more expected utility than focusing on base reality.
This post by EY is about breaking out of a simulation: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message