“UDT2” and “against UD+ASSA”

I’m reposting some old posts that I originally sent to the “decision theory workshop” mailing list and the “everything-list”, because I occasionally want to reference these posts, but the former mailing list is private and, while the latter is public, I can’t figure out how to create direct links to posts that are viewable without becoming a member.

UDT2 is a decision theory idea that I came up with to try to solve some problems in UDT1.1; however, I’m not very happy with it currently. UD+ASSA, or UDASSA, is an anthropic reasoning idea that I came up with and then moved away from prior to UDT. See also this post for further discussion of UDASSA.
UDT2 (originally “toward a solution of the ‘unintentional simulation’ problem”, 1/25/2011)
(I think this approach potentially solves several problems besides “unintentional simulation” but I’ll start there since it provides the clearest motivation.)
I first described this problem (without naming it) at http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/120y. Here’s a condensed version:

Two UDT1 (or UDT1.1) agents play one-shot PD. It’s common knowledge that agent A must make a decision in 10^100 ticks (computation steps), whereas agent B has 3^^^3 ticks. While B is trying to derive the logical consequences of returning ‘C’ or ‘D’ on the world program P, it is likely to come up with a proof by simulation of A’s output, after which it will decide to play D.
I think what A should have done is (if it were running a smarter decision theory), instead of deciding directly on C or D, modify itself into a program K = “simulate the original agents A and B and output ‘C’ if and only if both of the simulated agents self-modify into K within some time limit”.

And B (if it were also running a smarter decision theory) would also self-modify into K, whether or not it happens to simulate A’s decision to self-modify into K prior to its own self-modification, and do this before the time limit built into K expires.
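As a concrete illustration, here is a minimal sketch of K’s structure in Python. The simulation oracles and the particular time limit are hypothetical stand-ins introduced for the example, not anything specified in the original post:

```python
TIME_LIMIT = 10**99  # illustrative deadline, within A's 10^100-tick budget

def K(simulate_A, simulate_B):
    # simulate_A / simulate_B stand in for simulating the original agents;
    # each returns (program_become, ticks_used): which program the agent
    # self-modified into, and how many ticks that took.
    prog_A, t_A = simulate_A()
    prog_B, t_B = simulate_B()
    if prog_A == "K" and t_A <= TIME_LIMIT and prog_B == "K" and t_B <= TIME_LIMIT:
        return "C"
    return "D"

# Stub runs: both agents self-modify into K in time, so K cooperates;
# if either fails to, K defects.
print(K(lambda: ("K", 10**50), lambda: ("K", 10**60)))  # C
print(K(lambda: ("K", 10**50), lambda: ("B", 10**60)))  # D
```

The point of the construction is that K’s output depends only on whether both agents adopt K in time, so each agent can verify the other’s adoption without the full simulation asymmetry biting.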
So that’s my starting intuition, and I want to try to answer: what is this smarter decision theory? It seems that at least two changes need to be made to UDT1:
An agent must take the space of possible decisions to be the set of possible programs it can self-modify into, instead of the set of outputs or input/output maps. (This change is needed anyway if we want the agent to be able to self-improve in general.)
An agent must consider not just the consequences of eventually reaching some decision, but also the consequences of the amount of time it spends on that decision. (This change is needed anyway if we want the agent to be economical with its computational resources.)
So, while UDT1 optimizes over possible outputs to its input and UDT1.1 optimizes over possible input/output mappings it could implement, UDT2 simultaneously optimizes over possible programs to self-modify into and the amount of time (in computation steps) to spend before self-modification.
How to formulate UDT2 more precisely is not entirely clear yet. Assuming the existence of a math intuition module which runs continuously to refine its logical uncertainties, one idea is to periodically interrupt it and, during the interrupt, ask it about the logical consequences of statements of the form “S, upon input X, becomes T at time t” for all programs T, with t being the time at the end of the current interrupt. At the end of the interrupt, return T(X) for the T that has the highest expected utility according to the math intuition module’s “beliefs”. (One of these Ts should be equivalent to “let the math intuition module run for another period and ask again later”.)
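The selection rule at the end of one interrupt can be sketched as follows; the expected-utility oracle passed in is a hypothetical stand-in for the math intuition module’s beliefs, and the candidate programs are given as plain callables:

```python
def udt2_step(candidates, expected_utility, X):
    # At the end of an interrupt, pick the program T maximizing the
    # estimated utility of "S, upon input X, becomes T at time t"
    # (t is implicit: the end of the current interrupt), then run it.
    best_T = max(candidates, key=expected_utility)
    return best_T(X)

# Toy instance with two candidate programs and stub "beliefs".
T_coop = lambda X: "C"
T_defect = lambda X: "D"
beliefs = {T_coop: 2.0, T_defect: 1.0}
print(udt2_step([T_coop, T_defect], beliefs.get, X=None))  # C
```

One of the candidates would, in a fuller version, be “keep deliberating and ask again at the next interrupt”, which is what makes the time-to-decide part of the optimization.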
Suppose agents A and B above are running UDT2 instead of UDT1. It seems plausible that A would decide to self-modify into K, in which case B would not suffer from the “unintentional simulation” problem, since if it does prove that A self-modifies into K, it can then easily prove that if B does not self-modify into K within K’s time limit, A will play D, and therefore “B becomes K at time t” is the best choice for some t.
It also seems that UDT2 is able to solve the problem that motivated UDT1.1 without having “ignore the input until the end” hardcoded into it, which perhaps makes it a better departure point than UDT1.1 for thinking about bargaining problems. Recall that the problem was:
Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0.
The idea here is that both agents, running UDT2, would self-modify into T = “return A if input is 1, otherwise return B” if their math intuition modules say that “S, upon input 1, becomes T” is positively correlated with “S, upon input 2, becomes T”, which seems reasonable to assume.
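The target program T itself is trivial; the substance is in both copies converging on the same T. A sketch, with the payoff rule from the problem statement:

```python
def T(assigned_number):
    # The program both copies self-modify into: the assigned number acts
    # as a symmetry breaker, so the two copies choose different options.
    return "A" if assigned_number == 1 else "B"

def payoff(choice_1, choice_2):
    # Each copy gets $10 iff the two copies chose different options.
    return 10 if choice_1 != choice_2 else 0

print(payoff(T(1), T(2)))  # 10
```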
I think UDT2 also correctly solves Gary’s AgentSimulatesPredictor problem and my “two more challenging Newcomb variants”. (I’ll skip the details unless someone asks.)
To me, this seems to be the most promising approach to try to fix some of UDT1’s problems. I’m curious if others agree/disagree, or if anyone is working on other ideas.
two more challenging Newcomb variants (4/12/2010)
On Apr 11, 2:45 pm, Vladimir Nesov wrote:

There, I need the environment to be presented as function of the agent’s strategy. Since predictor is part of agent’s environment, it has to be seen as function of the agent’s strategy as well, not as function of the agent’s source code.
It doesn’t seem possible, in general, to represent the environment as a function of the agent’s strategy. I applied Gary’s trick of converting multi-agent problems into Newcomb variants to come up with two more single-agent problems that UDT1 (and perhaps Nesov’s formulation of UDT as well) does badly on.
In the first Newcomb variant, Omega says he used a predictor that did an exact simulation of you for 10^100 ticks and outputs “one-box” if and only if the simulation outputs “one-box” within 10^100 ticks. While actually making the decision, you are given 10^200 free ticks.
In the second example (which is sort of the opposite of the above), Omega shows you a million boxes, and you get to choose one. He says he used 10^100 ticks and whatever computational shortcuts he could find to predict your decision, and put $1 million in every box except the one he predicted you would choose. You get 10^100 + 10^50 ticks to make your decision, but you don’t get a copy of Omega’s predictor’s source code.
In these two examples, the actual decision is not more important than how predictable or unpredictable the computation that leads to the decision is. More generally, it seems that many properties of the decision computation might affect the environment (in a way that needs to be taken into account) besides its final output.
At this point, I’m not quite sure if UDT1 fails on these two problems for the same reason it fails on Gary’s problem. In both my first problem and Gary’s problem, UDT1 seems to spend too long “thinking” before making a decision, but that might just be a superficial similarity.
against UD+ASSA, part 1 (9/26/2007)
I promised to summarize why I moved away from the philosophical position that Hal Finney calls UD+ASSA. Here’s part 1, where I argue against ASSA. Part 2 will cover UD.
Consider the following thought experiment. Suppose your brain has been destructively scanned and uploaded into a computer by a mad scientist. Thus you find yourself imprisoned in a computer simulation. The mad scientist tells you that you have no hope of escaping, but he will financially support your survivors (spouse and children) if you win a certain game, which works as follows. He will throw a fair 10-sided die with sides labeled 0 to 9. You are to guess whether the die landed with the 0 side up or not. But here’s a twist: if it does land with “0” up, he’ll immediately make 90 duplicate copies of you before you get a chance to answer, and the copies will all run in parallel. All of the simulations are identical and deterministic, so all 91 copies (as well as the 9 copies in the other universes) must give the same answer.
ASSA implies that just before you answer, you should think that you have 0.91 probability of being in the universe with “0” up. Does that mean you should guess “yes”? Well, I wouldn’t. If I were in that situation, I’d think “If I answer ‘no’ my survivors are financially supported in 9 times as many universes as if I answer ‘yes’, so I should answer ‘no’.” How many copies of me exist in each universe doesn’t matter, since it doesn’t affect the outcome that I’m interested in.
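The bookkeeping behind that reasoning can be spelled out directly: count the universes (one per die face, all equally weighted) in which the survivors get supported, ignoring how many copies of the guesser each universe contains. This is just an illustration of the argument above, not anything from the original post:

```python
def universes_supported(guess_zero):
    # Ten equally-weighted universes, one per die face 0..9. Survivors
    # are supported exactly in the universes where the guess is correct;
    # the 90 extra copies in the "0" universe change nothing here.
    return sum(1 for face in range(10) if (face == 0) == guess_zero)

print(universes_supported(True))   # guess "0": supported in 1 universe
print(universes_supported(False))  # guess "not 0": supported in 9
```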
Notice that in this thought experiment my reasoning mentions nothing about probabilities. I’m not interested in “my” measure, but in the measures of the outcomes that I care about. I think ASSA holds intuitive appeal to us because, historically, copying of minds isn’t possible, so the measure of one’s observer-moment and the measures of the outcomes that are causally related to one’s decisions are strictly proportional. In that situation, it makes sense to continue to think in terms of subjective probabilities defined as ratios of measures of observer-moments. But in the more general case, ASSA doesn’t hold up.
against UD+ASSA, part 2 (9/26/2007)
In part one I argued against ASSA. Here I first summarize my argument against UD, then against the general possibility of any single objective measure.
There is an infinite number of universal Turing machines, so there is an infinite number of UDs. If we want to use one UD as an objective measure, there has to be a universal Turing machine that is somehow uniquely suitable for this purpose. Why that UTM and not some other? We don’t even know what that justification might look like.
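For reference, the UTM-dependence is visible in the standard definition of the universal distribution (a textbook formula, not from the original post): the probability of a string x sums over the programs p that make the chosen UTM U output x,

```latex
m_U(x) = \sum_{p \,:\, U(p)=x} 2^{-|p|}
```

Different choices of U yield measures that agree only up to a multiplicative constant depending on the pair of machines, which is exactly the arbitrariness being objected to here.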
Computation is just a small subset of math. I knew this was the case, having learned about oracle machines in my theory of computation class. But I didn’t realize just how small a subset until I read Theory of Recursive Functions and Effective Computability, by Hartley Rogers. Given that there is so much mathematical structure outside of computation, why should those non-computable structures not exist? How can we be sure that they don’t exist? If we are not sure, then we have to take the possibility of their existence into account when making decisions, in which case we still need a measure in which they have nonzero measures.
At this point I started looking for another measure that can replace UD. I came up with what I called “set theoretic universal measure”, where the measure of a set is inversely related to the length of its description in a formal set theory. Set theory covers a lot more math, but otherwise we still have the same problems. Which formal set theory do we use? And how can we be sure that all structures that can possibly exist can be formalized as sets? (An example of something that can’t would be a device that can decide the truth value of any set theoretic statement.)
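One way to formalize “inversely related to the length of its description” (my reading; the original post doesn’t spell it out) parallels the UD:

```latex
m(S) \propto 2^{-\ell(S)}
```

where ℓ(S) is the length of the shortest formula that defines S in the chosen formal set theory. The choice of formal theory then plays the same role for this measure that the choice of UTM plays for the UD.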
Besides the lack of good candidates, the demise of ASSA means we don’t need an objective measure anymore. There is no longer an issue of sampling, so we don’t need an objective measure to sample from. The thought experiment in part 1 of “against UD+ASSA” points out that in general, it’s not the measure of one’s observer-moment that matters, but the measures of the outcomes that are causally related to one’s decisions. Those measures can be interpreted as indications of how much one cares about the outcomes, and therefore can be subjective.
So where does this chain of thought lead us? I think UD+ASSA, while flawed, can serve as a kind of stepping stone towards a more general rationality. Somehow UD+ASSA is more intuitively appealing, whereas truly generalized rationality looks very alien to us. I’m not sure any of us can really practice the latter, even if we can accept it philosophically. But perhaps our descendants can. One danger I see with UD+ASSA is that we’ll program it into an AI, and the AI will be forever stuck with the idea that non-computable phenomena can’t exist, no matter what evidence it might observe.
This post caused me to read up on UD+ASSA, which helped me make sense of some ideas that were bouncing around in my head for a long time. Hopefully my thoughts on it make sense to others here.
against UD+ASSA, part 1 (9/26/2007) [bet on d10 rolling a zero or not-zero, but you’ll be copied 91 times if it lands on zero...]
I think under UD+ASSA, having exact copies made doesn’t necessarily increase your measure, which would mostly sidestep this problem. But I think it’s still conceptually possible to have situations under UD+ASSA that increase one’s measure, so the rest of my post here assumes that the madman copies you in some kind of measure-increasing rather than a measure-splitting way.
This scenario doesn’t seem like a contradiction with UD+ASSA if you believe that the probability that 0 would be a good answer to precommit to saying, based on the outcome (10%), does not need to equal the subjective probability that you will see 0 as the answer (91%). The fact that the subjective probability doesn’t line up with the way that you should answer in order to get a certain outcome doesn’t need to mean that the subjective probability doesn’t exist or is invalid. The chance that 0 is a good answer to precommit to (10%) is equal to the madman’s and your family’s subjective probability that 0 ends up being the answer (10%). I think Quantum Mechanics and maybe also the Anthropic Trilemma imply that different people can have different subjective probabilities and have different proportions of their measure go to different results, and UD+ASSA seems to be compatible with that in my understanding.
The madman is just putting the player in a cruel situation: you can bet on 0 and have most of your measure and a minority of everyone else’s measure go to the outcome where your family benefits, or you can bet on not-0 and have a minority of your measure and a majority of everyone else’s measure go to the outcome where your family benefits. This situation is made a little easier to reason about by the detail that you won’t get to personally experience and interact with the outcome of your family benefiting, so it feels somewhat obvious to prioritize everyone else’s measure in that outcome rather than your own measure in that outcome. Reasoning about preferences in situations where different people have different measures over the outcomes feels extremely unintuitive and paints a very alien picture of reality, but I don’t think it’s ruled out.
One aspect of UD+ASSA that is weird is that the UD is uncomputable itself. This seems to contradict the notion of assuming that everything is computable, although maybe there is a special justification that can be given for this?
I don’t think the 0.91 probability is necessarily incorrect. You just have to remember that as long as you care about your family and not your experience of knowing your family is looked after, you only get paid out once in the universe, not once per copy.
Two UDT1 (or UDT1.1) agents play one-shot PD. It’s common knowledge that agent A must make a decision in 10^100 ticks (computation steps), whereas agent B has 3^^^3 ticks
What does it mean when it’s said that a decision theory is running in bounded time?
I think it means the decision is being made by a machine/person/algorithm/computer, in, if not a reasonable amount of time*, then at least a finite amount of time.
*If you’re playing chess, then there’s probably a limit on how long you’re willing to wait for the other person to make a move.
In the mad scientist example, why would your measure for the die landing 0 be 0.91? I think Solomonoff Induction would assign probability 0.1 to that outcome, because you need an extra log2(90) bits to specify which clone you are. Or is this just meant to illustrate a problem with ASSA, UD not included?
I think UDT2 also correctly solves Gary’s AgentSimulatesPredictor problem and my “two more challenging Newcomb variants”. (I’ll skip the details unless someone asks.)

I applied Gary’s trick of converting multi-agent problems into Newcomb variants to come up with two more single-agent problems that UDT1 (and perhaps Nesov’s formulation of UDT as well) does badly on.
I’m curious about both of these, as well as who Gary is.

Gary is Gary Drescher, who contributed to decision theory research on LW and on the “decision theory workshop” mailing list.