By your theory, if you believe that we are near the singularity, how should we update on the likelihood that we exist at such an incredibly important time?
We can directly observe the current situation that's already trained into our minds; that is clearly where we are (there is no legible preference telling us otherwise, i.e. that we should primarily or at least significantly care about other things instead, though in principle there could be, and so, on superintelligent reflection, we might develop such claims). Updatelessly, we can ask which situations are more likely a priori, in order to formulate more global commitments (to listen to particular computations) that coordinate across many situations, where the current situation is only one of the possibilities. But the situations are possible worlds, not possible locations/instances of your mind. The same world can have multiple instances of your mind (in practice most importantly because other minds are reasoning about you, but also it's easy to set up concretely for digital minds), and that world shouldn't be double-counted for the purposes of deciding what to do, because all these instances within one world act jointly to shape that same world; they won't be acting to shape multiple worlds, one per instance.
And so the probabilities of situations are probabilities of the possible worlds that contain your mind, not probabilities of your mind being in a particular place within those worlds. I think the notion of the probability of your mind being in a particular place doesn't make sense (it's not straightforwardly a decision-relevant thing that forms part of preference data, the way the probability of a possible world is); it conflates uncertainty about a possible world with uncertainty about location within a possible world.
Possibly this originates from the imagery of a possible world being a location in some wider multiverse that contains many possible worlds, similarly to how instances of a mind are located within some wider possible world. But even in a multiverse, multiple instances of a mind (existing across multiple possible worlds) shouldn't double-count those possible worlds, and so they shouldn't ask about the probability of being in a particular possible world of the multiverse; instead, they should ask about the probability of that possible world itself. The two can be used synonymously, but conceptually there is a subtle difference, and this conflation might be contributing to the temptation to ask about the probability of being in a particular situation rather than the probability of the possible worlds containing that situation, even though there doesn't seem to be a principled reason to consider such a thing.
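To make the double-counting point concrete, here is a minimal toy sketch (in Python; the two-world setup, the payoffs, and all names are hypothetical illustrations, not anything from the comment above). It compares weighting outcomes once per possible world against weighting them once per instance of the agent, and shows how the latter lets a world with many copies dominate the decision purely because of its copy count.

```python
# Toy comparison: weighting possible worlds once vs. once per agent-instance.
# All numbers here are made up for illustration.

worlds = [
    # prior probability of the world, number of instances of the agent in it,
    # and the payoff of each candidate action if chosen in that world
    {"p": 0.9, "instances": 1,    "A": 10, "B": 0},
    {"p": 0.1, "instances": 1000, "A": 0,  "B": 5},
]

def eu_per_world(action):
    """Expected utility, counting each possible world once, weighted by its prior."""
    return sum(w["p"] * w[action] for w in worlds)

def eu_per_instance(action):
    """Expected utility that counts a world once per instance of the agent
    living in it (the double-counting the comment argues against)."""
    total = sum(w["p"] * w["instances"] for w in worlds)
    return sum(w["p"] * w["instances"] * w[action] for w in worlds) / total

for action in ("A", "B"):
    print(action, round(eu_per_world(action), 2), round(eu_per_instance(action), 2))
# A 9.0 0.09
# B 0.5 4.96
```

With per-world weighting, action A wins (9.0 vs 0.5); with per-instance weighting, B wins (about 0.09 vs 4.96), solely because the second world happens to contain a thousand copies of the agent, even though all of those copies would be acting jointly to shape that one world.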
Trying to break out of a simulation is a different game than preventing x-risks in the base world, and may have even higher utility if we expect almost inevitable extinction.