If you were to ask me, at two different random points in time, what odds I would take to live 10^10^10^10 years or die in an hour, and what odds I would take to live 10^10^10^10^10^10 years or die in an hour, you would likely get the same answer. I can identify that one number is bigger than the other, but the difference means about as little to me as the difference between a billion dollars and a billion and one dollars.
At some point it simply doesn’t matter how much you increase the payoff: I won’t take the new bet, no matter how little you worsen the odds against me. Where that point lies is arbitrary in the same sense as any other point where the utility of two different events times their respective probabilities balance out.
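One way to make that balancing point concrete is with a bounded utility function, so that utility saturates as the payoff grows. The particular function and its scale below are illustrative assumptions, not anything stated above; the point is only that once the payoff saturates the bound, no further increase can buy back even slightly worse odds.

```python
import math

# Hypothetical bounded utility: u(years) -> 1 as years -> infinity.
# The exponential form and the scale constant are assumptions for
# illustration, not part of the original comment.
def u(years, scale=1000.0):
    return 1.0 - math.exp(-years / scale)

# Expected utility of "probability p of living `years`, else die now
# (utility 0)".
def expected_utility(p, years):
    return p * u(years)

# The payoff is already saturated at a million years, so an
# astronomically larger payoff adds nothing...
print(expected_utility(0.80, 1e6))
print(expected_utility(0.80, 1e12))
# ...while any worsening of the odds strictly loses:
print(expected_utility(0.79, 1e12))
```

Under any such bounded utility, there really is a probability floor below which no payoff, however astronomical, makes the bet worth taking.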
I think this is equivalent to my comment below about patching the utility function, but more pithily expressed. The difficulty lies in trying to reconcile human intuition, which deals well with numbers up to 7 or so, with actual math. If we could intuitively feel the difference between 10^10^10 and 10^10^10^10, in the same way we feel the difference between 5 and 6, we might well accept Omega’s offers all the way down, and might even be justified in doing so. But in fact we don’t, so we’ll only go down the garden path until the point where the difference between the current probability and the original 80% becomes intuitively noticeable; and then either stop, or demand the money back. The paradox is that the problem has two sets of numbers: one too astronomically large to care about, and one that starts out un-feelable but eventually hits the “Hey, I care about that” boundary.
I think the reconciliation, short of modifying oneself to feel the astronomically large numbers, is to just accept the flaws in the brain and stop the garden path at an arbitrary point. If Omega complains that I’m not being rational, well, what do I care? I’ve already extracted a heaping big pile of utilons that are quite real according to my actual utility function.
I disagree that it’s a flaw. Discounting the future, even asymptotically, is a preference statement, not a logical shortcoming. Consider this situation:
Omega offers you two bets, and you must choose one. Bet #1 says you have a 50% chance of dying immediately, and a 50% chance of living 10 average lifespans. Bet #2 says you have a 100% chance of living a single average lifespan.
Having lived a reasonable part of an average lifespan, I can grok these numbers quite well. Still, I would choose Bet #2. Given the opportunity, I wouldn’t modify myself to prefer Bet #1. Moreover, I hope any AI with the power and the necessity to choose one of these bets for me would choose Bet #2.
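Preferring Bet #2 is consistent with expected-utility maximization under a sufficiently concave (risk-averse) utility over lifespan, even though Bet #1 has five times the expected lifespan. The specific utility function below is an illustrative assumption, chosen only to show that such a preference needn’t be a logical error:

```python
import math

# Lifespans measured in units of one average lifetime.
# This concave, bounded utility function is an assumption for
# illustration, not something stated in the comment.
def u(lifespans):
    return 1.0 - math.exp(-lifespans)

# Bet #1: 50% die immediately (utility 0), 50% live 10 lifespans.
eu_bet1 = 0.5 * u(0) + 0.5 * u(10)
# Bet #2: live exactly 1 lifespan with certainty.
eu_bet2 = 1.0 * u(1)

print(eu_bet1)  # ~0.500
print(eu_bet2)  # ~0.632, so Bet #2 has higher expected utility
```

With a risk-neutral (linear) utility the ranking would reverse, which is exactly why the choice is a preference statement rather than a shortcoming.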
Yes, fair enough; I should have said “accept the way the brain currently works” rather than using loaded language—apparently I’m not quite following my own prescription. :)