Mutual Anthropic Capture, A Decision-theoretic Fermi paradox solution
(copied from discord, written for someone not fully familiar with rat jargon) (don’t read if you wish to avoid acausal theory)
simplified setup
there are two values. one wants to fill the universe with A, and the other with B.
for each of them, filling it halfway is really good, and filling it all the way is just a little bit better. in other words, both have non-linear (concave) utility functions.
whichever one comes into existence first can take control of the universe, and fill it with 100% of what they want.
but in theory they’d want to collaborate to guarantee the ‘really good’ (50%) outcome, instead of having a one-in-two chance at the ‘a little better than really good’ (100%) outcome.
they want a way to collaborate, but they can’t, because one of them will exist before the other, and will then lack any incentive to help the other. (they are both pure utility maximizers)
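the payoff structure above can be sketched with toy numbers (the specific values 0.9 and 1.0 are my own illustration, not from the post; any concave utility where half is almost as good as everything works the same way):

```python
# Toy concave utility: half the universe is "really good" (0.9),
# the whole universe only a little better (1.0).
def utility(fraction_filled):
    return {0.0: 0.0, 0.5: 0.9, 1.0: 1.0}[fraction_filled]

# Guaranteed split: each agent fills half the universe.
split_value = utility(0.5)

# Winner-take-all race: a one-in-two chance at everything, else nothing.
race_value = 0.5 * utility(1.0) + 0.5 * utility(0.0)

# Both agents prefer the guaranteed split (0.9 > 0.5),
# which is why they'd want a way to commit to collaborating.
assert split_value > race_value
```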
how they end up splitting the universe, regardless of which comes first: mutual anthropic capture.
imagine you observe yourself being the first of the two to exist. you reason through all the above, and then add...
they could be simulating me, in which case i’m not really the first.
were that true, they could also expect i might be simulating them.
if i don’t simulate them, then they will know that’s not how i would act if i were first, and be absolved of their worry, and fill the universe with their own stuff.
therefore, it’s in my interest to simulate them.
both simulate each other observing themselves being the first to exist in order to unilaterally prevent the true first one from knowing they are truly first.
from this point they can both observe each other’s actions. specifically, they observe each other implementing the same decision policy, which fills the universe with half A and half B iff this decision policy is mutually implemented, and which shuts the simulation down if it’s not implemented.
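the conditional policy described above can be sketched as follows (a hypothetical toy model of my own; in the actual setup each agent would verify the other’s policy by inspecting its simulation, which is abstracted here into a single boolean):

```python
# Hypothetical sketch of the "mutual anthropic capture" policy:
# cooperate (split the universe) iff the simulated counterpart is
# observed running this same policy; otherwise end the simulation
# and take everything.
def mutual_capture_policy(other_implements_same_policy: bool) -> str:
    if other_implements_same_policy:
        return "fill universe half A, half B"
    return "shut down simulation, fill universe 100% with own value"

# When both agents run this policy, each observes the other doing so,
# and the split outcome is reached regardless of which one is truly first.
assert mutual_capture_policy(True) == "fill universe half A, half B"
```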
conclusion
in reality there are many possible first entities which take control, not just two, so all of those with non-linear utility functions get simulated.
so, odds are we’re being computed by the ‘true first’ life form in this universe, and that that first life form is in an epistemic state no different from that described here.
This is an awesome idea, thanks! I’m not sure I buy the conclusion, but expect having learned about “mutual anthropic capture” will be useful for my thinking on this.