Reflective AIXI and Anthropics

It’s possible to define a version of Solomonoff Induction with Reflective Oracles that allows an AIXI-like agent to consider hypotheses that include itself or other equally powerful agents, going partway towards addressing naturalized induction issues.

So then a natural question is “what does this partial answer seem to point to for anthropics?”

To figure this out, we’ll be going over a few of the thought experiments in Bostrom’s book about anthropic reasoning, and seeing what Reflective-Oracle AIXI has to say about them.

The following conclusions are very dependent on how many extra bits it takes to encode “same environment, but I’m that other agent over there”, so I’ll be making a lot of assumptions that I can’t prove, such as the assumption that the most efficient way of encoding a hypothesis is to specify an environment and then specify the place in it where the agent interfaces with that environment. This seems unavoidable so far, so I’ll at least make an effort to list out all the implicit assumptions that go into setting up the problems.

As a quick refresher, SSA (the self-sampling assumption) and SIA (the self-indication assumption) work as follows: SSA takes the probability of a world as given and evenly distributes that probability mass across everything in “your reference class” in that particular world. SIA reweights the probability of a world by the number of instances of “things in your reference class” that it contains. In short, SIA has a strong bias in favor of possible worlds/hypotheses/Turing machines with many instances of you, while SSA doesn’t care how many instances of you are present in a possible world.
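To make the contrast concrete, here’s a minimal sketch (a toy example of mine, not something from Bostrom) of how SSA and SIA spread credence over (world, observer) pairs, given a prior over worlds and a count of reference-class observers in each world:

```python
# Toy illustration of SSA vs SIA credences over (world, observer-index) pairs,
# given a prior over worlds and the number of reference-class observers
# that each world contains.

def ssa(world_priors, observer_counts):
    # SSA: keep each world's prior, split it evenly among that world's observers.
    return {
        (w, i): p / observer_counts[w]
        for w, p in world_priors.items()
        for i in range(observer_counts[w])
    }

def sia(world_priors, observer_counts):
    # SIA: reweight each world by its observer count, then renormalize.
    weights = {w: p * observer_counts[w] for w, p in world_priors.items()}
    total = sum(weights.values())
    return {
        (w, i): weights[w] / (total * observer_counts[w])
        for w in world_priors
        for i in range(observer_counts[w])
    }

# Coin-flip example: the tails-world has 1 observer, the heads-world has 2.
priors = {"tails": 0.5, "heads": 0.5}
counts = {"tails": 1, "heads": 2}

print(sum(p for (w, _), p in ssa(priors, counts).items() if w == "tails"))  # 0.5
print(sum(p for (w, _), p in sia(priors, counts).items() if w == "tails"))  # 0.333...
```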

Thought Experiment 1: Incubator

Stage (a): In an otherwise empty world, a machine called “the incubator” kicks into action. It starts by tossing a fair coin. If the coin falls tails then it creates one room and a man with a black beard inside it. If the coin falls heads then it creates two rooms, one with a black-bearded man and one with a white-bearded man. As the rooms are completely dark, nobody knows his beard color. Everybody who’s been created is informed about all of the above. You find yourself in one of the rooms. Question: What should be your credence that the coin fell tails?
Stage (b): A little later, the lights are switched on, and you discover that you have a black beard. Question: What should your credence in Tails be now?

This will be modeled as a machine that represents the environment, that has a bit that is used to determine how the coinflip comes up. Also, in the second case, because there are two possible places where the agent can be hooked up to the environment, another bit is required to specify where the agent is “attached” to the environment. These three cases have minimum description lengths of $n+1$, $n+2$, and $n+2$ bits respectively (where $n$ is the description length of the environment), so by the universal semimeasure, they have (relative) probability mass of 50%, 25%, and 25% respectively.

So, assuming the problem setup actually works this way, the answers are 50% and 67%, respectively. This seems to point towards Reflective-Oracle Solomonoff Induction (RO-SI) doing something like SSA. The intuitive reason is that a hypothesis with a bunch of copies of you requires a bunch of extra bits to specify which copy of you the input data stream is coming from, and this cancels out the increased number of hypotheses in which you are in the well-populated world. There may be $2^{50}$ copies of you in a “world”, but because it requires 50 bits to specify “I’m that copy right there”, each specific hypothesis/Turing machine of the form “I’m in that world and am also that particular copy” requires 50 extra bits to specify where in the environment the data is being read out from, and receives a probability penalty of $2^{-50}$, which, when multiplied by the large number of hypotheses of that form, recovers normality.
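As a sanity check, here’s a small sketch of this bookkeeping for the incubator. The environment’s description length $n$ is shared by every hypothesis, so it cancels out of all the ratios:

```python
# Relative prior mass of the three incubator hypotheses under the universal
# semimeasure, plus the resulting credences at stages (a) and (b).
# Only the bits beyond the shared environment description matter.

def relative_mass(extra_bits):
    # A hypothesis of length n + extra_bits gets mass proportional to 2^(-extra_bits).
    return 2.0 ** -extra_bits

hypotheses = {
    "tails, black beard": relative_mass(1),  # coin bit only
    "heads, black beard": relative_mass(2),  # coin bit + attachment bit
    "heads, white beard": relative_mass(2),
}
total = sum(hypotheses.values())
priors = {h: m / total for h, m in hypotheses.items()}
print(priors)  # 0.5, 0.25, 0.25

# Stage (a): credence in tails before the lights come on.
print(priors["tails, black beard"])  # 0.5

# Stage (b): condition on seeing a black beard.
black = priors["tails, black beard"] + priors["heads, black beard"]
print(priors["tails, black beard"] / black)  # 0.666...
```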

There are two ways in which things get more interesting. One is that, for environments with many observers in your reference class (RO-SI uses as its reference class all spots in the environment that receive the exact same observation string as is present in its memory), you’ll assign much higher probability to being one of the (fairly few) observers whose spot in the environment has low K-complexity. It definitely isn’t a uniform distribution over observers in the possible world; it favors observers whose location in the environment is lower-complexity to specify. A similar effect occurs in logical induction, where there tend to be peaks of trading activity by simple traders on low-K-complexity days. Sam’s term for this was “Graham’s crackpot”: there could be a simple trader with a lot of initial mass that just bides its time until some distant low-K-complexity day and screws up the probabilities then (it can’t do so infinitely often, though).

The other point of interest is what this does on the standard counterexamples to SSA.

To begin with, the Doomsday argument is valid under SSA, so RO-SI inherits it. This doesn’t seem like much of a limitation in practice, both because RO-SI uses a very restrictive reference class that in most practical cases includes just the agent itself, and because RO-SI is about as powerful as possible when it comes to updating on data, so the starting prior would very quickly be washed out by a maximally detailed inside view on the probability of extinction, using all the data that has been acquired so far.

Thought Experiment 2: Adam and Eve

Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery. One day a serpent approached the couple and spoke thus: “Pssst! If you embrace each other, then either Eve will have a child or she won’t. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn’t become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’ theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!”

Here’s where the situation gets nifty.

Assume the environment is as follows: there’s the coding of the Turing machine that represents the environment ($n$ bits), the 1 bit that represents “fertile or not”, and the bitstring/extra data that specifies where Eve is in the environment ($L$ bits, L for “location”). Eve has been wandering around the Garden of Eden for a bit, and since she’s a hyper-powerful inductor, she’s accumulated enough information to rule out all the other hypotheses that say she’s actually not in the Garden of Eden. So it’s down to two hypotheses that are both encoded by $n+L+1$ bits, which get equal probability. If we assume a utility function that’s like “+1 reward for sex, −10 reward for creating billions of suffering beings” (if the penalty scaled with the number of suffering beings created, as it would for an Eve that wasn’t scope-insensitive, the serpent’s reasoning would fail even if its probability estimate were granted), the expected utility of sex is $0.5 \cdot (+1) + 0.5 \cdot (+1 - 10) = -4$, and Eve ignores the serpent.
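Spelling that arithmetic out (a minimal sketch using the scope-insensitive utility function above; the key point is that the serpent’s argument does nothing to the 50/50 split between the two surviving hypotheses):

```python
# Eve's decision in the basic case: the two surviving hypotheses
# ("fertile" and "infertile", both n + L + 1 bits) get equal probability.
p_fertile, p_infertile = 0.5, 0.5

u_sex_fertile = 1 - 10  # +1 for sex, -10 for creating billions of suffering beings
u_sex_infertile = 1     # +1 for sex, nothing bad follows
u_no_sex = 0            # abstaining is worth 0 in either world

eu_sex = p_fertile * u_sex_fertile + p_infertile * u_sex_infertile
print(eu_sex)  # -4.0 < 0, so Eve ignores the serpent
```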

The specific place where the serpent’s reasoning breaks down is in assuming that the probability of being Eve/the difficulty of specifying Eve’s place in the universe goes down/up when a decision is made that results in the world having a lot more beings in it. It doesn’t work that way.

However, it gets more interesting if you assume everyone in the resulting created world has sense data such that even a hyper-powerful inductor doesn’t know whether or not they are Eve before the fateful decision.

Also, assume that it takes $L'$ bits to specify any particular person’s location if they’re not Eve. This is a sort of “equally distributed probability” assumption on the future people, and it doesn’t restrict things that much. Maybe it’s much easier to point to Eve than to some other person, maybe it’s the other way around.

Also assume that everyone’s utility functions are like “+1 for sex, −10 for finding out shortly after sex that you are one of the suffering future beings, or that you created billions of such.”

To begin with the analysis, break the hypothesis space into:

two worlds of $n+L+1$ bits where Eve is fertile/infertile, and you are Eve,

and $N$ (a few billion) worlds of (it depends) bits where Eve was fertile, sex was had, and you are not Eve. The reason why it’s tricky to say what the description-length of being one of the future agents is, is because it takes fewer bits to encode a world where an agent does a thing in accordance with the laws of math than it takes to encode a world where an agent does a different thing that they wouldn’t have normally done. In this particular case, it would take $S$ extra bits (S for “surgery”) to specify “at this particular spot, ignore what Eve would have done and instead substitute in the action ‘have sex’, and then run things normally”.

So, if Eve definitely has sex, it takes $n+1+L'$ bits to specify one of the future agents. If Eve definitely doesn’t have sex, it takes $n+1+L'+S$ bits to specify one of the future agents.

Taking these two cases, we can rescale things to get a mass of $1$, $1$, and either $N \cdot 2^{L-L'}$ or $N \cdot 2^{L-L'-S}$, on the three classes of worlds, respectively. Expected utility calculations will work out the same way if we use these numbers instead of probabilities that add up to 1, because it’s just a scaling on expected utility, and the scaling can be moved over to the utility function, which is invariant under scale-and-shift. So then, in the first case, writing $M = N \cdot 2^{L-L'}$ for the rescaled mass on the “I’m not Eve” worlds, the expected utilities of sex and not-sex become:

$EU(\text{sex}) = 1 \cdot (+1) + 1 \cdot (+1 - 10) + M \cdot (+1 - 10) = -8 - 9M$

$EU(\text{not-sex}) = 1 \cdot 0 + 1 \cdot 0 + M \cdot (-10) = -10M$

So sex will be had if $-8 - 9M > -10M$, i.e. if $M > 8$. The crossover point occurs approximately at a 30 bit penalty to specify a non-Eve person ($2^{-30}$ is approximately 1/billion, which roughly cancels against the billions of future people). So, if Eve has sex, and assigns less than about a 1/5 chance to being Eve, it’s a consistent state of affairs. The reasoning is “I’m probably not Eve, and so I’m probably already going to suffer (since I know in advance what my decision is in this case), so I might as well pick up that +1 utility”.

Redoing this analysis for the case where Eve doesn’t have sex, we get that sex will be had if $N \cdot 2^{L-L'-S} > 8$, and in this case the crossover point occurs approximately at a 30 bit penalty to specify both the non-Eve person and that particular decision intervention. (There can also be consistent solutions where the reflective oracle is perched right on the decision threshold and randomizes accordingly, but I’ll ignore those for the time being; they don’t change much.)

Considering the specific case where the ratio of the probability masses for “I’m Eve” and “I’m not Eve” is less than $\frac{1}{4}$ (in the sex case) and greater than $\frac{1}{4}$ (in the non-sex case), we get a case where the decision made depends on the choice of reflective oracle! If the reflective oracle picks sex, sex is the best decision (by the reasoning “I’m probably not Eve, might as well pick up the +1 utility”). If the reflective oracle picks not-sex, not-sex is the best decision (by the reasoning “I’m likely enough to be Eve (because the non-Eve people live in a lower-probability universe where an intervention on Eve’s action happened) that I won’t chance it with the coinflip on fertility”).
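Here’s a small sketch of that fixed-point structure. The specific numbers for the population size, location penalty, and surgery penalty below are illustrative assumptions of mine, not values from the setup; the point is just that for a range of them, “sex” and “not-sex” are each self-consistent once the oracle’s choice feeds back into the mass on the “I’m not Eve” worlds:

```python
# Sketch of the two candidate fixed points in the modified Adam-and-Eve setup.
# Rescaled masses: 1 and 1 on the two "I'm Eve" worlds, and
# n_people * 2^(L - L_prime) on the "I'm not Eve" worlds, with an extra
# 2^(-S) surgery penalty when the oracle says "not-sex" (the future people
# then only exist in worlds where Eve's action was overridden).

def eu(action, mass_not_eve):
    # Utilities: +1 for sex; -10 for fertile-Eve-who-had-sex or for being a sufferer.
    eu_eve_infertile = 1 if action == "sex" else 0
    eu_eve_fertile = (1 - 10) if action == "sex" else 0
    eu_not_eve = (1 - 10) if action == "sex" else -10  # suffering either way
    return 1 * eu_eve_infertile + 1 * eu_eve_fertile + mass_not_eve * eu_not_eve

def is_fixed_point(oracle_action, n_people, L, L_prime, S):
    penalty_bits = (L_prime - L) + (S if oracle_action == "not-sex" else 0)
    mass_not_eve = n_people * 2.0 ** (-penalty_bits)
    best = max(["sex", "not-sex"], key=lambda a: eu(a, mass_not_eve))
    return best == oracle_action

# Illustrative numbers: ~2 billion future people, a 25-bit location penalty
# for non-Eve people, and a 10-bit surgery penalty. Both actions come out
# self-consistent, so the behavior depends on the choice of reflective oracle.
params = dict(n_people=2**31, L=10, L_prime=35, S=10)
print(is_fixed_point("sex", **params))      # True
print(is_fixed_point("not-sex", **params))  # True
```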

So, RO-AIXI doesn’t exactly fail in this case (as SSA is alleged to), because there’s a flaw in the serpent’s reasoning: the difficulty of specifying where you are in the universe doesn’t change when you make a decision that creates a bunch of other agents, so long as you don’t think you could be those other agents you’re creating.

But if there’s a case where the other agents are subjectively indistinguishable from yourself, and it’s bad for you to create them, but good for them to push the “create” button, there are multiple fixed-points of reasoning that are of the form “I probably press the button, I’m probably a clone, best to press the button” and “I probably don’t press the button, I’m probably not a clone, best to not press the button”.

Another interesting angle on this is that the choice of action has a side effect of altering the complexity of specifying various universes in the first place, and the decision rule of RO-AIXI doesn’t take this side effect into account; it only cares about the causal consequences of taking a particular action.

The arguments of Lazy Adam, Eve’s Card Trick, and UN++ in Bostrom’s book fail to apply to RO-AIXI by a similar line of reasoning.

Sleeping Beauty, SSA, and CDT:

There’s a possible future issue where, according to this paper, it’s possible to money-pump the combination of SSA and CDT (which RO-AIXI uses) in the Sleeping Beauty experiment. Looking further at this is hindered by the fact that RO-AIXI implicitly presumes that the agent has access to the entire string of past observations that it made, so it doesn’t interact cleanly with any sort of problem that involves amnesia or memory-tampering. I haven’t yet figured out a way around this, so I’m putting up a 500-dollar bounty on an analysis that manages to cram the framework of RO-AIXI into problems that involve amnesia or memory-tampering (as a preliminary step towards figuring out whether the combination of SSA-like behavior and CDT gets RO-AIXI into trouble by the argument in the aforementioned paper).

Takeaways:

RO-AIXI seems to act according to SSA probabilities, although it has several interesting features. The first is that it assigns much more probability to embeddings of the agent in the environment that are low K-complexity; it definitely doesn’t assign equal probability to all of them. The second interesting feature is that the reference class it uses is “spots in the environment that can be interpreted as receiving my exact string of inputs”, the most restrictive one possible. This opens the door to weird embeddings like “the etchings on that rock, when put through this complicated function, map onto my own sense data”, but those sorts of things are rather complex to specify, so they have fairly low probability mass. The third interesting feature is that the probability of being a specific agent in the world doesn’t change when you make a decision that produces a bunch of extra agents, which defuses the usual objections to SSA. The final interesting feature is that making a particular decision can affect the complexity of specifying various environments, and the standard decision procedure doesn’t take this effect into account, permitting multiple fixed-points of behavior.

Also, I don’t know how this interacts with Dutch-books on Sleeping Beauty, because it’s hard to say what RO-AIXI does in cases with amnesia or memory-tampering; I’d really like to know, and am willing to pay 500 dollars for an answer to that.