I think the framework of RO-AIXI can be modified pretty simply to include memory-tampering.
Here’s how you do it. Say you have an environment E and an RO-AIXI A running in it. You have run the AIXI for a number of steps, and it has a history of observations O. Now we want to alter its memory to have a history of observations O’. This can be implemented in the environment as follows:
1. Create a new AIXI A’, with the same reward function as the original and no memories. Feed it the sequence of observations O’.
2. Run A’ in place of A for the remainder of E. In the course of this execution, A’ will accumulate total reward R. Terminate A’.
3. Give the original AIXI reward R, then terminate it.
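A minimal sketch of steps 1–3 as an environment subroutine. The `Agent` class and `env_steps` interface here are hypothetical stand-ins (nothing from the RO-AIXI formalism); only the control flow matters:

```python
# Toy sketch of the memory-tampering subroutine. `Agent` is a
# hypothetical stand-in for an RO-AIXI agent; `env_steps` stands
# in for running the remainder of the environment E.

class Agent:
    def __init__(self, reward_fn):
        self.reward_fn = reward_fn   # shared with the original AIXI
        self.history = []            # no memories at creation
    def observe(self, obs):
        self.history.append(obs)

def tamper_memory(original, env_steps, new_observations):
    """Replace `original`'s history O with O' by swapping in a clone."""
    # Step 1: fresh agent with the same reward function, fed O'.
    clone = Agent(original.reward_fn)
    for obs in new_observations:
        clone.observe(obs)
    # Step 2: run the clone in place of the original for the rest of
    # the environment, accumulating its total reward R.
    total_reward = 0.0
    for obs, reward in env_steps(clone):
        clone.observe(obs)
        total_reward += reward
    # Step 3: hand R back to the original agent; both then terminate.
    return total_reward
```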
This basically captures what it means for AIXI’s memory to be erased. Two AIXIs are distinguished only by their observation histories and reward functions, so creating a new AIXI that shares a reward function with the original is equivalent to changing the first AIXI’s observations. The new AIXI, A’, will also be able to reason about the possibility that it was produced by such a ‘memory-tampering program’, since this is just another possible RO-Turing machine. In other words, it can reason about the possibility that its memory has been altered.
[EDITED: My original comment falsely stated that AIXI-RO avoids dutch-booking, but I no longer think it does. I’ve edited my reasoning below]
As applied to the Sleeping Beauty problem from the paper, I think this WILL be dutch-booked. If we assume it takes one bit to specify heads/tails, and one to specify which day one wakes on, then the agent will have probabilities
1⁄2 Heads,
1⁄4 Tails, wake on Monday
1⁄4 Tails, wake on Tuesday
Since memory-erasure has the effect of creating a new AIXI with no memories, the betting scenario (in section 3.2 of the paper) has the structure of either a single AIXI choosing to take a bet, or two copies of the same AIXI playing a two-player game. RO-AIXI plays Nash equilibria in such scenarios. Say the AIXI has taken bet 9. From the perspective of the current AIXI, let p be the probability that it takes bet 10, and let q be the probability that its clone takes bet 10.
E[u] = 1⁄2 ((−15 + 2ϵ) + p(10 + ϵ)) + 1⁄2 ((15 + ϵ) + pq(−20 + 2ϵ) + p(1 − q)(−10 + ϵ) + q(1 − p)(−10 + ϵ))
= 3⁄2 ϵ + 1⁄2 (2pϵ + q(−10 + ϵ))
This has the structure of a prisoner’s dilemma. In particular, the expected utility of the current AIXI is maximized at p = 1, regardless of q. So both AIXIs will take the bet and incur a sure loss. On the other hand, for this same reason the original AIXI A would not take bet 9 on Sunday, if given the choice.
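The algebra can be checked numerically. A quick script (with ϵ written as `eps`) confirms that the simplified form above matches the full expression, and that dE[u]/dp = ϵ > 0, so p = 1 is the best response whatever the clone does:

```python
import random

def eu(p, q, eps):
    """Expected utility of the current AIXI, as written above."""
    heads = (-15 + 2*eps) + p*(10 + eps)
    tails = ((15 + eps) + p*q*(-20 + 2*eps)
             + p*(1 - q)*(-10 + eps) + q*(1 - p)*(-10 + eps))
    return 0.5*heads + 0.5*tails

def eu_simplified(p, q, eps):
    return 1.5*eps + 0.5*(2*p*eps + q*(-10 + eps))

random.seed(0)
for _ in range(1000):
    p, q, eps = random.random(), random.random(), random.random()
    assert abs(eu(p, q, eps) - eu_simplified(p, q, eps)) < 1e-9

# E[u] is linear in p with slope eps > 0, so p = 1 dominates:
eps = 0.01
assert eu(1.0, 0.5, eps) > eu(0.0, 0.5, eps)
```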
Not quite. If taking bet 9 is a prerequisite to taking bet 10, then AIXI won’t take bet 9, but if bet 10 gets offered whether or not bet 9 is accepted, then AIXI will be like “ah, future me will take the bet, and wind up with 10+ϵ in the heads world and −20+2ϵ in the tails world. This is just a given. I’ll take this +15/-15 bet as it has net positive expected value, and the loss in the heads world is more than counterbalanced by the reduction in the magnitude of loss for the tails world”
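The arithmetic in that reply checks out, assuming (as in the E[u] expression earlier in the thread) that bet 9 pays −15 + 2ϵ on heads and +15 + ϵ on tails:

```python
eps = 0.01

# Future self takes bet 10 regardless: it nets 10 + eps on heads and
# -20 + 2*eps across the two tails wakings. That part is a given.
bet10_heads = 10 + eps
bet10_tails = -20 + 2*eps

# Marginal expected value of adding bet 9 on top of that:
ev_with_bet9 = 0.5*(bet10_heads + (-15 + 2*eps)) + 0.5*(bet10_tails + (15 + eps))
ev_without = 0.5*bet10_heads + 0.5*bet10_tails

# The heads-world loss is outweighed by shrinking the tails-world loss.
assert ev_with_bet9 > ev_without
```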
Something else feels slightly off, but I can’t quite pinpoint it at this point. Still, I guess this solves my question as originally stated, so I’ll PM you for payout. Well done!
(btw, you can highlight a string of text and hit ctrl+4 to turn it into math-mode)
I figured out what feels slightly off about this solution. For events like “I have a long memory and accidentally dropped a magnet on it”, it intuitively feels like describing your spot in the environment and the rules of your environment has much lower K-complexity than finding a Turing machine/environment that starts by feeding you the exact (long) scrambled sequence of memories that you have, and then resumes normal operation.
Although this also feels like something nearby is actually desired behavior. If you rewrite the tape to be describing some other simple environment, you would intuitively expect the AIXI to act as if it’s in the simple environment for a brief time before gaining enough information to conclude that things have changed and rederive the new rules of where it is.
Well, it COULD be the case that the K-complexity of the memory-erased AIXI environment is lower, even after it learns that this happened. The reason is that there could be many possible past AIXIs who have their memory erased/altered and end up in the same subjective situation. The memory-erasure hypothesis can then use the lowest-K-complexity AIXI who ends up with these memories. As the AIXI learns more, it can gradually piece together which of the potential past AIXIs it actually was, and the K-complexity will go back up again.
EDIT: Oh, I see you were talking about actually having a RANDOM memory in the sense of a random sequence of 1s and 0s. Yeah, but this is no different than AIXI thinking that any random process is high K-complexity. In general, and discounting merging, the memory-altering subroutine will increase the complexity of the environment by a constant plus the complexity of whatever transformation you want to apply to the memories.
Incidentally, you can use the same idea to have RO-AIXI do anthropic reasoning/bargaining about observers that are in a broader reference class than ‘exact same sense data’, by making the mapping O → O’ some sort of coarse-graining.
I don’t think AIXI needs any special sauce to understand memory wiping. There is a Turing machine that writes the memory-wiped contents to the tape all in one pass. It’s just going to be a simulation of the universe with a slightly more complicated “bridging law.” Different programs that write the right thing to the tape are on equal footing, no matter whether they describe different universes or different parts of the same universe.
So we might expect an assignment more like P(HM)=0.49, P(TM)=0.49, P(TT)=0.02 (EDIT: fixed) (not sure whether we should expect it to be harder to describe the normal Tails branch). And then AIXI will act on this using whatever action it predicts will get the greatest signal in the reward channel.
″ P(HM)=0.49, P(TM)=0.49, P(TT)=0.2 ”—Are these supposed to be mutually exclusive probabilities?
″ There is a turing machine that writes the memory-wiped contents to tape all in one pass. ”—Yes, this is basically what I said. (‘environment’ above could include ‘the world’ + bridging laws). But you also need to alter the reward structure a bit to make it match our usual intuition of what ‘memory-wiping’ means, and this has significance for decision theory.
Consider: if your own memory was erased, you would probably still be concerned about what was going to happen to you later. But a regular AIXI won’t care about what happens to its memory-wiped clone (i.e. another AIXI inducting on the ‘memory-wiped’ stream), because they don’t share an input channel. So to fix this you give the original AIXI all of the rewards that its clone ends up getting.
Oops, that should have been 0.02 :)
Good point about caring for yourself even if you expect to lose the memory of (e.g.) the current hour. AIXI only cares about the tapes that are successors of the current one. Maybe expand the tape from write-only to also have some erasing operations?
I think there are probably some other toy problems that illustrate this issue a lot better than Sleeping Beauty, where AIXI equating memory loss with death might not actually change its decisions much in the bet.
I still don’t see how you’re getting those probabilities. Say it takes 1 bit to describe the outcome of the coin toss, and assume it’s easy to find all the copies of yourself (i.e. your memories) in different worlds. Then you need:
1 bit to specify if the coin landed heads or tails
If the coin landed tails, you need 1 more bit to specify if it’s Monday or Tuesday.
So AIXI would give these scenarios P(HM)=0.50, P(TM)=0.25, P(TT)=0.25.
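The count above is just a Solomonoff-style prior of 2^(−bits) per branch; a quick check of the arithmetic:

```python
# Description lengths in bits under the counting above: HM needs only
# the coin bit; TM and TT each need the coin bit plus a day bit.
bits = {"HM": 1, "TM": 2, "TT": 2}
probs = {k: 2.0**-b for k, b in bits.items()}
assert probs == {"HM": 0.5, "TM": 0.25, "TT": 0.25}
assert sum(probs.values()) == 1.0
```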
I’m not thinking of it like specifying parts of the toy problem. I’m thinking of it as if for each of HM, TM, and TT, the observer is about to receive 2 bits that describe which situation they’re in, and the only object that matters for the probability of each is the shortest program that reproduces all past observations plus the next 2 bits.
If we assume Sleeping Beauty has lots of information, we might expect that the shortest matching program will look like a simulation of physical law, plus a “bridging law” that, given this simulation, tells you what symbols get written to the tape. It is in this context that it seems like HM and TM are equally complex—you’re simulating the same chunk of universe and have what seems like (naively) a similar bridging law. It’s only for Tuesday that you obviously need a different program to reproduce the data.
I agree. I still think that the probabilities would be closer to 1⁄2, 1⁄4, 1⁄4. The bridging law could look like this: search over the universe for compact encodings of my memories so far, then see what is written next onto this encoding. In this case, it would take no more bits to specify waking up on Tuesday, because the memories are identical, in the same format, and just slightly later temporally.
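As a toy illustration of that bridging law (the encoding and tape here are entirely made up, not anything from the paper): scan the simulated universe for each occurrence of the agent’s memory string and read off what comes next. The Monday and Tuesday wakings cost no extra bits to distinguish, because the same search finds both continuations:

```python
def bridging_law(universe_tape, memory):
    """Find every spot where `memory` is embedded in the simulated
    universe and return the symbol written immediately after each one."""
    hits = []
    start = universe_tape.find(memory)
    while start != -1:
        nxt = start + len(memory)
        if nxt < len(universe_tape):
            hits.append(universe_tape[nxt])
        start = universe_tape.find(memory, start + 1)
    return hits

# Toy universe: identical memories appear on Monday and Tuesday,
# followed by different next observations.
tape = "...wake[MEM]M...sleep...wake[MEM]T..."
print(bridging_law(tape, "[MEM]"))  # → ['M', 'T']
```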
In a naturalized setting, it seems like the tricky part would be getting the AIXI on Monday to care what happens after it goes to sleep. It ‘knows’ that it’s going to lose consciousness (it can see that its current memory encoding is going to be overwritten), so its next prediction is undetermined by its world-model. There is one program that gives it the reward of its successor and then terminates, as I described above, but it’s not clear why the AIXI would favour that hypothesis. Maybe if it has been in situations involving memory-wiping before, or has observed other RO-AIXIs in such situations.