In my posts, I’ve argued that indexical uncertainty like this shouldn’t be represented using probabilities. Instead, I suggest that you consider yourself to be all of the many copies of you, i.e., both the ones in the ancestor simulations and the one in 2010, making decisions for all of them. Depending on your preferences, you might consider the consequences of the decisions of the copy in 2010 to be the most important and far-reaching, and therefore act mostly as if that was the only copy. [Emphasis added]
In the subsequent 15 years, I’ve upweighted influencing the multiverse through my copies in simulations, relative to base universes, to the point where they’re about equally important in my mind, for four reasons:
Realizing that there are probably universes with vastly greater computational resources than ours, implying there are more simulations containing me than I had thought.
Realizing that there may well be many possible motivations for running simulations besides interest in ancestors, which was the main motivation talked about back in 2010. These include acausal trade, curiosity about aliens, science on the distribution of superintelligences in the multiverse, gathering philosophical ideas, entertainment, and altruism.
Being fortunate in various ways, on a personal level, that seem more likely if I’m in a simulation.
Doing badly on a civilizational level, in a way that devalues trying to influence base reality. I.e., relative to 2010, it looks more likely that we’ll undergo the AI transition blindly, without key decision-makers paying attention to important philosophical and strategic ideas.
Concretely, this mostly cashes out to me thinking and writing with potential simulators in mind as an additional audience, hoping my ideas might benefit or interest some of them even if they end up largely ignored in this reality.
Realizing that there are probably universes with vastly greater computational resources than ours, implying there are more simulations containing me than I had thought.
What made you believe that?
I find it hard to even conceptualize how to think through something like that, including the anthropics, which computationally powerful universes to admit, &c.
My intuition is that allowing universes with hypercomputation almost surely puts us inside a dovetailer being run somewhere in the most computationally powerful universes, but this introduces a ton of difficulties into reasoning about the multiverse and our position inside it.
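For readers who haven’t seen the construction: a dovetailer is a scheduler that interleaves the execution of every program in some enumeration, so each program gets unboundedly many steps even though the enumeration is infinite. A minimal toy sketch in Python, with simple counter generators standing in for the enumeration of all Turing machines (the real construction enumerates program codes, not these stand-ins):

```python
from itertools import count

def make_program(i):
    """Toy stand-in for the i-th program: a generator yielding i, 2i, 3i, ...
    A real dovetailer would instead interpret the i-th program code."""
    return (i * n for n in count(1))

def programs():
    """Enumerate (program id, program) pairs, one per index."""
    for i in count(1):
        yield (i, make_program(i))

def dovetail(rounds):
    """In round r, admit the r-th program, then advance every program
    admitted so far by one step. Over unboundedly many rounds, every
    program in the enumeration receives unboundedly many steps."""
    trace = []
    admitted = []
    enumeration = programs()
    for _ in range(rounds):
        admitted.append(next(enumeration))   # admit one new program per round
        for pid, prog in admitted:           # one step for each admitted program
            trace.append((pid, next(prog)))
    return trace
```

The relevance to the point above: a dovetailer run with enough (or unbounded) computational resources eventually executes every program in its enumeration, including any program that computes our universe, which is why admitting hypercomputation makes “we are being run inside a dovetailer somewhere” hard to avoid.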
Yeah, my intuition is similar to yours, and it seems very difficult to reason about all of this. That just represents my best guess.