I think your position, perhaps oversimplified, is this: ‘Being in a simulation’ makes sense only if it has practical, observable differences. But as most simulations closely match the base world, there are no observable differences. So the claim has no meaning.
However, in our case, this isn’t true. The fact that we know we are in a simulation ‘destroys’ the simulation, and thus its owners may turn it off or delete those who come too close to discovering they are in a simulation. If I care about the sudden non-existence of my instance, this can be a problem.
Moreover, if the alien simulation idea is valid, they are simulating possible or even hypothetical worlds, so there are no copies of me in base reality, as there is no relevant base reality (excluding infinite multiverse scenarios here).
Also, being in an AI-testing simulation has observable consequences for me: I am more likely to observe strange variations of world history or play a role in the success or failure of AI alignment efforts.
If I know that I am simulated for some purpose, the only thing that matters is which conclusions I would prefer the simulation’s owners to draw. But in the case of an alien simulation, it is not clear to me what I should want.
One more consideration is what I call meta-simulation: a simulation in which the owners are testing the ability of simulated minds to guess that they are in a simulation and hack it from inside.
TLDR: If I know that I am in a simulation, then the simulation plus its owners is the base reality that matters to me.
I don’t think it’s clear that knowing we’re in a simulation “destroys” the simulation. This assumes that the occupants’ belief that they are being simulated creates an invalidating difference from the desired reference class of plausible pre-singularity civilizations, but I don’t think that’s true:
Actual, unsimulated, pre-singularity civilizations are in similar epistemic positions to us and thus many of their influential occupants may wrongly but rationally believe they are simulated, which may affect the trajectory of the development of their ASI. So knowing the effects of simulation beliefs is important for modeling actual ASIs.
This is true only if we assume that a base reality for our civilization exists at all. But knowing that we are in a simulation shifts the main utility of our existence, which Nesov wrote about above.
For example, if we can break out of some simulation, that would be a more important event than anything happening in the base reality, where we likely go extinct anyway.
And since the proportion of simulated worlds is very large, even a small chance of breaking out from inside a simulation, perhaps via negotiation with its owners, has more expected utility than focusing on base reality.
This post by EY is about breaking out of a simulation: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message