What is your degree of subjective credence—your posterior probability—that the logical coin came up 1?
. . .
(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be “50%”.)
If the question were, “What odds should you bet at?”, it could be answered using your values. Suppose each copy of you has $1,000, and copies of you in a red room are offered a bet that costs $1,000 and pays $1,001 if the Nth bit of pi is 0. (The per-copy arithmetic is verified in the sketch after the options.) Which do you prefer:
To refuse the bet?
With 50% subjective logical probability, the Nth bit of pi will be 0 and you will have $1,000 per copy.
With 50% subjective logical probability, the Nth bit of pi will be 1 and you will have $1,000 per copy.
To take the bet?
With 50% subjective logical probability, the Nth bit of pi will be 0 and you will have $1,000.999999999 per copy.
With 50% subjective logical probability, the Nth bit of pi will be 1 and you will have $999.999999 per copy.
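A minimal sketch verifying the per-copy figures above, assuming the room counts described later in the post (if the bit is 0, 1,000,000,000 copies wake in red rooms and one in a green room; if it is 1, the counts are reversed); the names are hypothetical:

```python
# Per-copy wealth for the red-room bet, in exact rational arithmetic.
from fractions import Fraction

COPIES = 10**9 + 1  # total copies of you

def per_copy_wealth(bit, take_bet):
    red = 10**9 if bit == 0 else 1  # copies waking in red rooms
    green = COPIES - red            # copies waking in green rooms
    if take_bet:
        # Red-room copies stake their $1,000; the bet pays $1,001 iff the bit is 0.
        red_wealth = 1001 if bit == 0 else 0
    else:
        red_wealth = 1000
    return Fraction(red * red_wealth + green * 1000, COPIES)

for bit in (0, 1):
    print(bit, float(per_copy_wealth(bit, take_bet=True)))
# 0 -> ~1000.999999999 per copy; 1 -> ~999.999999 per copy
```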
But the question is, “What is your posterior probability?” This is not a decision problem, so I don’t know that it has an answer.
I think it may be natural to ask instead: “Given that your learned cognitive system of rational prediction is competing for influence over anticipations used in making decisions, in a brain which awards influence over anticipation to different cognitive systems depending on the success of their past reported predictions, which probability should your rational prediction system report to the brain’s anticipation-influence-awarding mechanisms?”
Suppose you know the following:
Your brain will use a simple Bayesian mechanism which will treat cognitive systems as hypotheses and award influence using Bayesian updating (sketched in the code after this list).
In the future, the competitor cognitive systems to your rational prediction system will make predictions which will cause you to take harmful actions. The less influential the competitor systems are, the less harmful those actions will be.
The competitor cognitive systems will predict 1:1 probabilities of the experiences of being informed that the Nth bit of pi is 0 or 1.
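Here is a minimal sketch of the influence-awarding mechanism assumed in the first point, treating cognitive systems as hypotheses; the function and names are hypothetical:

```python
# Bayesian influence update: posterior influence is proportional to
# prior influence times the likelihood the system assigned to the
# experience that actually occurred.
def award_influence(influences, likelihoods):
    posterior = {name: influences[name] * likelihoods[name] for name in influences}
    total = sum(posterior.values())
    return {name: w / total for name, w in posterior.items()}

# Both systems predicted 1:1 odds for the reported bit, so nothing changes.
start = {"rational": 0.5, "competitor": 0.5}
print(award_influence(start, {"rational": 0.5, "competitor": 0.5}))
# {'rational': 0.5, 'competitor': 0.5}
```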
This question could be answered using your values. (The resulting influence adjustments are computed in the sketch after the options.) Which would you prefer:
In both green rooms and red rooms, to rationally predict 1:1 probabilities of the experiences of being informed that the Nth bit of pi is 0 or 1?
With 50% subjective logical probability, the Nth bit of pi will be 0. There will be 1,000,000,001 copies of you whose learned cognitive systems for rational prediction took a likelihood hit of 1⁄2. The competitor cognitive systems will also have taken a likelihood hit of 1⁄2. The relative influences of the cognitive systems will not change.
With 50% subjective logical probability, the Nth bit of pi will be 1. There will be 1,000,000,001 copies of you whose learned cognitive systems for rational prediction took a likelihood hit of 1⁄2. The competitor cognitive systems will also have taken a likelihood hit of 1⁄2. The relative influences of the cognitive systems will not change.
In red rooms, to rationally predict a 1,000,000,000:1 probability of the experience of being informed that the Nth bit of pi is 0, and in green rooms, to rationally predict a 1,000,000,000:1 probability of the experience of being informed that the Nth bit of pi is 1?
With 50% subjective logical probability, the Nth bit of pi will be 0. There will be 1,000,000,000 copies of you who woke up in red rooms, whose learned cognitive systems for rational prediction took a tiny 1,000,000,000⁄1,000,000,001 likelihood hit. The competitor cognitive systems will have taken a likelihood hit of 1⁄2. In those 1,000,000,000 copies, the relative influences of the cognitive systems will be adjusted by the ratio 2,000,000,000:1,000,000,001. There will also be one copy of you who woke up in a green room, whose learned cognitive systems for rational prediction took a likelihood hit of 1⁄1,000,000,001. In that copy, the relative influences of the cognitive systems will be adjusted by the ratio 2:1,000,000,001.
With 50% subjective logical probability, the Nth bit of pi will be 1. There will be one copy of you who woke up in a red room, whose learned cognitive systems for rational prediction took a likelihood hit of 1⁄1,000,000,001. The competitor cognitive systems will have taken a likelihood hit of 1⁄2. In that copy, the relative influences of the cognitive systems will be adjusted by the ratio 2:1,000,000,001. There will also be 1,000,000,000 copies of you who woke up in a green room, whose learned cognitive systems for rational prediction took a tiny 1,000,000,000⁄1,000,000,001 likelihood hit. In those 1,000,000,000 copies, the relative influences of the cognitive systems will be adjusted by the ratio 2,000,000,000:1,000,000,001.
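A sketch computing the adjustment ratios quoted above, plus the reversal count used in the next paragraph; exact rational arithmetic, hypothetical names:

```python
from fractions import Fraction
from math import log

half = Fraction(1, 2)                  # competitor's likelihood, always 1:1
# Rule 1: predict 1:1 everywhere; the adjustment to relative influence is 1.
print(half / half)                     # 1

# Rule 2: predict 1,000,000,000:1 for the bit matching your room color.
majority = Fraction(10**9, 10**9 + 1)  # likelihood in the 10^9 right-color copies
minority = Fraction(1, 10**9 + 1)      # likelihood in the single wrong-color copy
print(majority / half)                 # 2000000000/1000000001
print(minority / half)                 # 2/1000000001

# How many majority-copy updates undo one minority-copy update?
# Solve (majority/half)**k * (minority/half) = 1 for k.
k = -log(minority / half) / log(majority / half)
print(k)                               # ~28.9
```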
The answer depends on the starting relative influences and on the details of the function from amounts of non-rational anticipation to amounts of harm. But for perspective, the ratio 2:1,000,000,001 can be reversed with about 28.9 copies of the ratio 2,000,000,000:1,000,000,001, since solving (2,000,000,000⁄1,000,000,001)^k × (2⁄1,000,000,001) = 1 gives k = log(1,000,000,001⁄2) ⁄ log(2,000,000,000⁄1,000,000,001) ≈ 28.9.
If your copies are being merged, the optimal “rational” prediction would depend on the details of the merging algorithm. If the merging algorithm took the arithmetic mean of the updated influences, the optimal prediction would still depend on the starting relative influences and the harm from non-rational anticipations. But if the merging algorithm multiplicatively combined the likelihood ratios from every copy’s predictions, then the second prediction rule would be optimal.
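To make the contrast concrete, here is a sketch of the two merging rules; the function names and dict representation are my own assumptions:

```python
from fractions import Fraction

def mean_merge(influence_dicts):
    """Arithmetic mean of each system's updated influence across copies."""
    n = len(influence_dicts)
    names = influence_dicts[0].keys()
    return {name: sum(d[name] for d in influence_dicts) / n for name in names}

def product_merge(prior, likelihood_dicts):
    """Multiply each system's likelihoods across all copies, then renormalize."""
    merged = dict(prior)
    for likelihoods in likelihood_dicts:  # one dict of likelihoods per copy
        for name, lh in likelihoods.items():
            merged[name] = merged[name] * lh
    total = sum(merged.values())
    return {name: w / total for name, w in merged.items()}
```

Under product_merge, the second rule’s combined likelihood when the bit is 0 is (10^9⁄(10^9+1))^(10^9) × 1⁄(10^9+1) ≈ 3.7 × 10^(-10), while the first rule’s is (1⁄2)^(10^9+1), which is astronomically smaller; that is why multiplicative merging favors the second rule.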
To make decisions about how to value possibly logically impossible worlds, it may help to imagine that the decision problem will be iterated with the (N+1)th bit of pi, the (N+2)th bit, and so on.
(If the rational prediction system already has complete control of your brain’s anticipations, then there may be no reason to predict anything that does not affect a decision.)
I agree with Steve; we have to take a step back and ask not for probabilities but for decision algorithms that aim to achieve certain goals. Then it all makes sense; it has to. On materialism, whatever definition of “you” you try to settle upon, “you” is some set of physical objects that behaves according to a certain decision algorithm, and given the decision algorithm, “you” will have a well-defined expected future reward.
Let me suggest that for anthropic reasoning you are not directly calculating expected utility but actually trying to determine priors. This traces back to Occam’s razor and hence to complexity measures (a complexity prior). Further, it is not probabilities that you are trying to manipulate directly, but degrees of similarity (i.e., which reference class does a given observer fall into? What is the degree of similarity between given algorithms?). So rather than utility and probability, you are actually trying to manipulate something more basic: complexity and similarity measures.
Suggested analogy:
Complexity (is like) Utility
Similarity (is like) Probability
Let me suggest that rather than trying to ‘maximize utility’ directly, you should first attempt to ‘minimize complexity’ using a new, generalized form of rationality based on the above analogy. (The putative method would be an entirely new type of rationality which subsumes ordinary Bayesian reasoning as a special case.) The ‘expected complexity’ (analogous to ‘expected utility’) would be based on a ‘complexity function’ (analogous to a ‘utility function’) that combines similarity measures (similarities between algorithms) with the complexities of given outcomes. The utilities and probabilities would be derived from these calculations (ordinary Bayesian rationality would be derivative rather than fundamental).
M J Geddes (Black Swan Siren!)