So, according to your definition of “good news” and “bad news”, it might be bad news to find that you’ve made a good decision, and good news to find that you’ve made a bad decision? Why would a rational agent want to have such a concept of good and bad news?
If you wake up in the morning to learn that you got drunk last night, played the lottery, and won, then this is good news.
Let us suppose that, when you were drunk, you were computationally limited in a way that made playing the lottery (seem to be) the best decision given your computational limitations. Now, in your sober state, you are more computationally powerful, and you can see that playing the lottery last night was a bad decision (given your current computational power but minus the knowledge that your numbers would win). Nonetheless, learning that you played and won is good news. After all, maybe you can use your winnings to become even more computationally powerful, so that you don’t make such bad decisions in the future.
Why is that good news, when it also implies that in the vast majority of worlds/branches, you lost the lottery? It only makes sense if, after learning that you won, you no longer care about the other copies of you that lost, but I think that kind of mind design is simply irrational, because it leads to time inconsistency.
Whether these copies exist at all, and what measure they have, could depend on details of the lottery’s implementation. If it’s a classical lottery, all the (reasonable) quantum branches from the point at which you decided could have the same numbers.
I want to be careful to distinguish Many-Worlds (MW) branches from theoretical possibilities (with respect to my best theory). Events in MW-branches actually happen. Theoretical possibilities, however, may not. (I say this to clarify my position, which I know differs from yours. I am not here justifying these claims.)
My thought experiment was supposed to be about theoretical possibility, not about what happens in some MW-branches but not others.
But I’ll recast the situation in terms of MW-branches, because this is analogous to the scenario in your link. All of the MW-branches very probably exist, and I agree that I ought to care about them without regard to which one “I” am or will be subjectively experiencing.
So, if learning that I played and won the lottery in “my” MW-branch doesn’t significantly change my expectation of the measures of MW-branches in which I play or win, then it is neither good news nor bad news.
However, as wnoise points out, some theoretical possibilities may happen in practically no MW-branches.
This brings us to theoretical possibilities. What are my expected measures of MW-branches in which I play and in which I win? If I learn news N that revises my expected measures in the right way, so that the total utility of all branches is greater, then N is good news. This is the kind of news that I was talking about, news that changes my expectations of which of the various theoretical possibilities are in fact realized.
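To make the measure-weighted accounting concrete, here is a minimal sketch; all measures and utilities are made-up numbers, purely for illustration:

```python
# A hedged sketch of measure-weighted utility over MW-branches.
# All measures and utilities below are hypothetical, for illustration only.

def total_utility(branches):
    """Measure-weighted sum of utility over (measure, utility) branches."""
    return sum(measure * utility for measure, utility in branches)

# Conditional on playing: the ticket costs 1 util, the jackpot pays 1e6,
# and winning branches have measure 1e-7 -- a losing bet by stipulation.
played = [(1 - 1e-7, -1.0), (1e-7, 1e6)]

# Prior expectation: measure 0.9 that I never played at all.
prior = [(0.9, 0.0)] + [(0.1 * m, u) for m, u in played]

# News N ("I played") revises my expected measures from `prior` to `played`.
# N is good news iff it raises the measure-weighted total; here it lowers it.
print(total_utility(prior))   # roughly -0.09
print(total_utility(played))  # roughly -0.9
```

On these numbers, learning that I played (and won in one branch) goes along with a lower expected total across all branches, which is the sense in which such news can count as bad.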
I’m very surprised that this was downvoted. I would appreciate an explanation of the downvote.
This is the point at which believing in many worlds and caring about other branches leads to a very suspicious way of perceiving reality. I know the absurdity heuristic isn’t all that reliable, but still: would it really make you sad or angry or desperate to realise that you had won a billion (in any currency) under the described circumstances? Would you really celebrate on realising that the great filter, which wipes out a species 90% of the time, and which you previously believed we had already passed, is going to happen in the next 50 years?
I am ready to change my opinion about this style of reasoning, but probably I need some more powerful intuition pump.
Caring about other branches doesn’t imply having congruent emotional reactions to beliefs about them. Emotions aren’t preferences.
Emotions are not preferences, but I believe they can’t be completely disentangled. There is something wrong with a person who feels unhappy after learning that the world has changed towards his/her preferred state.
I don’t see how you can effectively apply social standards like “something wrong” to a mind that implements UDT. There are no human minds or non-human minds that I am aware of that perfectly implement UDT. There are no known societies of beings that do. It stands to reason that such a society would seem very other if judged by the social standards of a society composed of standard human minds.
When discussing UDT outcomes you have to work around that part of you that wants to immediately “correct” the outcome by applying non-UDT reasoning.
That “something wrong” was not so much a social standard as an expression of an intuitive feeling of contradiction, which I wasn’t able to specify more explicitly. I could anticipate general objections such as yours; however, it would help if you could be more concrete here. The question is whether one can say one prefers the state of the world in which one dies soon with 99% probability, even if one would in fact be disappointed after realising that it was really going to happen. I think we are now at risk of redefining a few words (like “preference”) to mean something quite different from what they used to mean, which I don’t find good at all.
And by the way, why is this a question of decision theory? There is no decision in the discussed scenario, only a question whether some news can be considered good or bad.
I don’t know if this is exactly the kind of thing you’re looking for, but you might like this paper arguing for why many-worlds doesn’t imply quantum immortality and like-minded conclusions based on jumping between branches. (I saw someone cite this a few days ago somewhere on Less Wrong, and I’d give them props here, but can’t remember who they were!)
It was Mallah, probably.
You’re probably right—going through Mallah’s comment history, I think it might have been this post of his that turned me on to his paper. Thanks Mallah!
It’s good news because you just gained a big pile of utility last night.
Yes, learning that you’re not very smart when drunk is bad news, but the money more than makes up for it.
Wei_Dai is saying that all the other copies of you that didn’t win lost more than enough utility to make up for it. This is far from a universally accepted utility measure, of course.
So Wei_Dai’s saying the money doesn’t more than make up for it? That’s clever, but I’m not sure it actually works.
Had the money more than made up for it, it would have been rational from a normal expected-utility perspective to play the lottery. My scenario was assuming that, with sufficient computational power, you would know that playing the lottery wasn’t rational.
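The stipulation can be put as a toy expected-utility check, with hypothetical numbers: if the win “more than made up for” the losses across branches, the expected utility of playing would be positive, and playing would have been rational even for the sober, more computationally powerful agent.

```python
# Hedged toy version of the stipulation; all numbers are hypothetical.

def expected_utility(p_win, win_utility, ticket_cost):
    """Ordinary expected utility of buying one ticket."""
    return p_win * win_utility - (1 - p_win) * ticket_cost

# By stipulation the lottery is a losing bet, so EU must come out negative.
eu = expected_utility(p_win=1e-7, win_utility=1e6, ticket_cost=1.0)
print(eu)  # negative: the win does not make up for the losing branches
```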
We’re not disagreeing about the value of the lottery—it was, by stipulation, a losing bet—we are disagreeing about the proper attitude towards the news of having won the lottery.
I don’t think I understand the difference in opinion well enough to discover the origin of it.
I must have misunderstood you, then. I think that we agree about having a positive attitude toward having won.