I think it’s important to be clear about what SIA says in different situations, here. Consider the following 4 questions:
A) Do we live in a simulation?
B) If we live in a simulation, should we expect basement reality to have a large late filter?
C) If we live in basement reality, should we expect basement reality (ie our world) to have a large late filter?
D) If we live in a simulation, should we expect the simulation (ie our world) to have a large late filter?
In this post, you persuasively argue that SIA answers “yes” to (A) and “not necessarily” to (B). However, (B) is almost never decision-relevant, since it’s not about our own world. What about (C) and (D)? (It’s easier to see how those could be decision-relevant for someone who buys SIA. I personally agree with you that something like Anthropic Decision Theory is the best way to reason about decisions, but responsible use of SIA+CDT is one way to get there in anthropic dilemmas.)
To answer (C): If we condition on living in basement reality, then SIA favors hypotheses that imply many observers in basement reality. The simulated copies are entirely irrelevant, since we have conditioned them away. (You can verify this with Bayes’ theorem.) So we are back to the SIA doomsday argument, and we face large late filters.
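The Bayes’-theorem check above can be sketched with a toy model. All numbers here are made up purely for illustration: a “late filter” hypothesis with many basement civilizations at our stage and no simulations, versus an “early filter” hypothesis with one basement civilization that survives and runs a trillion ancestor simulations.

```python
# Toy check of the claim in (C): conditioning on basement reality makes
# simulated observers irrelevant, restoring the SIA doomsday argument.
# All observer counts and priors are illustrative assumptions, not data.

hypotheses = {
    # Large late filter: many basement civs at our stage, none survive
    # to run simulations.
    "late":  {"prior": 0.5, "basement_observers": 1000, "sim_observers": 0},
    # Early filter: one basement civ at our stage, but it survives and
    # runs a trillion simulations of observers like us.
    "early": {"prior": 0.5, "basement_observers": 1, "sim_observers": 10**12},
}

def sia_posterior(observer_kinds):
    # SIA weights each hypothesis by prior * number of observers in our
    # epistemic situation, counting only the listed observer kinds.
    weights = {h: v["prior"] * sum(v[k] for k in observer_kinds)
               for h, v in hypotheses.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Unconditional SIA: simulated observers dominate, so "early" wins and
# we are almost certainly simulated (the post's anti-doomsday move).
print(sia_posterior(["basement_observers", "sim_observers"]))

# Conditioned on basement reality: only basement observers count, and
# the large-late-filter hypothesis dominates again (SIA doomsday).
print(sia_posterior(["basement_observers"]))
```

With these numbers, the unconditional posterior puts nearly all the mass on “early” (we’re simulated), while the basement-conditioned posterior puts roughly 1000/1001 on the large late filter, matching the verbal argument.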
To answer (D): Detailed simulations of civilizations that spread to the stars are vastly more expensive than detailed simulations of early civilizations. This means that the latter are likely to be far more common, and we’re almost certainly living in a simulation where we’ll never spread to the (simulated) stars. (This is plausibly because the simulation will be turned off before we get the chance.) You could discuss what terminology to use for this, but I’d be inclined to call this a large late filter, too.
So my preferred framing isn’t really that the simulation hypothesis “undercuts” the SIA doomsday argument. It’s rather that the simulation hypothesis provides one plausible mechanism for it: that we’re in a simulation that will end soon. But that’s just a question of framing/terminology. The main point of this comment is to provide answers to questions (C) and (D).
I disagree that (B) is not decision-relevant and that (C) is. I’m not sure, haven’t thought through all this yet, but that’s my initial reaction at least.
Ha, I wrote a comment like yours but slightly worse, then refreshed and your comment appeared. So now I’ll just add one small note:
To the extent that (1) normatively, we care much more about the rest of the universe than our personal lives/futures, and (2) empirically, we believe that our choices are much more consequential if we are non-simulated than if we are simulated, we should in practice act as if there are greater odds that we are non-simulated than we have reason to believe for purely epistemic purposes. So in practice, I’m particularly interested in (C) (and I tentatively buy SIA doomsday as explained by Katja Grace).
Edit: also, isn’t the last part of this sentence from the post wrong:
SIA therefore advises not that the Great Filter is ahead, but rather that we are in a simulation run by an intergalactic human civilization, without strong views on late filters for unsimulated reality.
Re your edit: That bit seems roughly correct to me.
If we are in a simulation, SIA doesn’t have strong views on late filters for unsimulated reality. (This is my question (B) above.) And since SIA thinks we’re almost certainly in a simulation, it’s not crazy to say that SIA doesn’t have strong views on late filters for unsimulated reality. SIA is very ok with small late filters, as long as we live in a simulation, which SIA says we probably do.
But yeah, it is a little bit confusing, in that we care more about late filters in unsimulated reality if we live in unsimulated reality. And in the (unlikely) case that we do, then we should ask my question (C) above, in which case SIA does have strong views on late filters.
Ah, I agree. I misread that bit as about filters for us given that we are non-simulated, but really it’s about filters for non-simulated civilizations, which under the simulation argument our existence doesn’t tell us much about. Thanks.
Regarding (D), it has been elaborated more in this paper (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization).
If 99.99 per cent of all civilizations go extinct, SIA Doomsday is true in basement reality, even though the surviving 0.01 per cent of civilizations create trillions of simulations and we are likely to be in one of them.
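The arithmetic behind this can be made explicit with illustrative numbers (the civilization counts and simulation counts below are assumptions, not estimates): most observers end up simulated, yet doom conditional on being in basement reality remains near-certain.

```python
# Toy numbers for the claim: basement doom can be near-certain even
# though almost all observers are simulated. Figures are illustrative.

basement_civs = 10_000        # civilizations at our stage in basement reality
surviving = 1                 # 0.01 per cent survive the late filter
sims_per_survivor = 10**12    # trillions of ancestor simulations each

simulated = surviving * sims_per_survivor
total_observers = basement_civs + simulated

frac_simulated = simulated / total_observers
basement_doom = (basement_civs - surviving) / basement_civs

print(f"P(we are simulated)   ~ {frac_simulated:.8f}")
print(f"P(doom | basement)    = {basement_doom:.4f}")
```

So on these assumptions we’d be simulated with overwhelming probability, while the probability of doom given basement reality stays at 99.99 per cent, exactly the combination the comment describes.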