Hyperreals or some other modification to the standard framework (see discussion of “infinity shades” in Bostrom) are necessary in order to say that a 50% chance of infinite utility is better than a 1/3^^^3 chance of infinite utility.
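To make the point concrete, here is a minimal sketch (my own toy model, not anything from Bostrom or the thread): represent a utility as a pair (a, b) standing for a·H + b, where H is one fixed infinite unit, hyperreal-style. Expected value scales both parts by the probability, and comparison is lexicographic, so a 50% shot at H really does beat a tiny shot at H, which plain real-valued expected utility (where both products are just "infinity") cannot express. The stand-in probability below is hypothetical, since 1/3^^^3 is far too small to write down.

```python
from fractions import Fraction

# Toy model: a utility is a pair (a, b) meaning a*H + b for a fixed
# infinite unit H. Expectation is linear, so it scales both parts.
def expected(p, utility):
    a, b = utility
    return (p * a, p * b)

H = (Fraction(1), Fraction(0))  # one unit of infinite utility

half_chance = expected(Fraction(1, 2), H)
tiny_chance = expected(Fraction(1, 10**100), H)  # stand-in for 1/3^^^3

# Python tuples compare lexicographically: the infinite part dominates,
# and the finite part breaks ties -- exactly the order we want here.
print(half_chance > tiny_chance)  # True
```

The design choice doing the work is the lexicographic comparison; with ordinary reals both expectations would collapse to the same "infinite" value and the ranking would be lost.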
Sigh, we seem to be talking past each other. You’re talking about choosing which unlikely god jerks you around, and I’m trying to say that it’s eventually time to eat lunch. If you have infinite utilities, how can you ever justify prioritizing something finite and likely, like eating lunch, over something unlikely but infinite? Keeping a few dollars is like eating lunch, so if you can’t rationally decide to eat lunch, the question is which unlikely god you’ll give your money to. I agree that it probably won’t be me.
If you have infinite utilities, how can you ever justify prioritizing something finite and likely, like eating lunch, over something unlikely but infinite?
Why is eating lunch “finite”, given that we have the possibility of becoming gods ourselves, and eating lunch makes that possibility more likely (compared to not eating lunch)?
ETA: Suppose you saw convincing evidence that skipping lunch would make you more productive at FAI-building (say there’s an experiment showing that lunch makes people mentally slow in the afternoon), you would start skipping lunch, right? Even if you wouldn’t, would it be irrational for someone else to do so?
There are two issues here: 1) what the most plausible cashing-out of an unbounded utility function recommends, and 2) whether that cashing-out is a sensible summary of someone’s values. I agree with you on 2) but think that you are giving bogus examples for 1). As with previous posts, if you concoct examples that have many independent things wrong with them, they don’t clearly condemn any particular component.
My understanding is that you want to say, with respect to 2), that you don’t want to act in accord with any such cashing-out, i.e. that your utility function is bounded insofar as you have one. Fine with me, I would say my own utility function is bounded too (although some of the things I assign finite utility to involve infinite amounts of stuff, e.g. I would prefer living forever to living 10,000 years, although boundedly so). Is that right?
But you also keep using what seem to be mistaken cashing-outs in response to 1). For instance, you say that:
Keeping a few dollars is like eating lunch, so if you can’t rationally decide to eat lunch,
But any decision theory/prior/utility function combination that gives in to Pascal’s Mugging will also recommend eating lunch (if you don’t eat lunch you will be hungry and have reduced probability of gaining your aims, whether infinite or finite). Can we agree on that?
If we can, then you should use examples where a bounded utility function and an unbounded utility function actually give conflicting recommendations about which action to take. As far as I can see, you haven’t done so yet.
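The dominance claim above can be illustrated in the same pair-of-parts toy model (again my own construction, with hypothetical numbers): if eating lunch raises the probability of reaching the possibly-infinite goal even slightly, the infinite part of the comparison settles the question before any finite cost or benefit matters.

```python
from fractions import Fraction

# Toy illustration: a plan's value is (probability of the infinite goal,
# finite side-payoff), compared lexicographically.
def plan_value(p_goal, finite_payoff):
    return (p_goal, Fraction(finite_payoff))

# Hypothetical numbers: skipping lunch saves a little finite utility, but
# eating keeps you sharp and nudges the goal probability up slightly.
skip_lunch = plan_value(Fraction(1, 10**9), 5)
eat_lunch = plan_value(Fraction(1, 10**9) + Fraction(1, 10**12), 0)

# The infinite part dominates, so lunch wins despite the finite cost.
print(eat_lunch > skip_lunch)  # True
```

So any prior/utility combination fanatical enough to give in to the mugging is, by the same arithmetic, fanatical enough to recommend lunch.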
I think you mean Arguments for—and against—probabilism. If you meant something else, please correct me.
I meant the paper that I already linked to earlier in this thread.
There are two issues here: 1) what the most plausible cashing-out of an unbounded utility function recommends, and 2) whether that cashing-out is a sensible summary of someone’s values. I agree with you on 2) but think that you are giving bogus examples for 1). As with previous posts, if you concoct examples that have many independent things wrong with them, they don’t clearly condemn any particular component.
I agree that we agree on 2).
The conflict here seems to be that you’re trying to persist and do math after getting unbounded utilities, and I’m inclined to look at ridiculous inputs and outputs from the decision-making procedure and say “See? It’s broken. Don’t do that!”. In this case the ridiculous input is a guess about the odds of me being god, and the ridiculous output is to send me money, or to divert resources to some other slightly less unlikely god if I don’t win the contest.
But any decision theory/prior/utility function combination that gives in to Pascal’s Mugging will also recommend eating lunch (if you don’t eat lunch you will be hungry and have reduced probability of gaining your aims, whether infinite or finite). Can we agree on that?
Maybe. I don’t know what it would conclude about eating lunch. Maybe the decision would be to eat lunch, or maybe some unknown interaction of the guesses about the unlikely gods would lead to performing bizarre actions to satisfy whichever of them seemed more likely than the others. Maybe there’s a reason people don’t trust fanatics.
If we can, then you should use examples where a bounded utility function and an unbounded utility function actually give conflicting recommendations about which action to take. As far as I can see, you haven’t done so yet.
Well, if we can exclude all but one of the competing unlikely gods, the OP is such an example. A bounded utility function would lead to a decision to keep the money rather than send it to me.
Otherwise I don’t have one. I don’t expect to have one, because I think that working with unbounded utility functions is intractable even if we can get it to be mathematically well-defined, since there are too many unlikely gods to enumerate.
But at this point I think I should retreat and reconsider. I want to read that paper by Hajek, and I want to understand the argument for bounded utility from Savage’s axioms, and I want to understand where having utilities that are surreal or hyperreal numbers fails to match those axioms. I found a few papers about how to avoid paradoxes with unbounded utilities, too.
This has turned up lots of stuff that I want to pay attention to. Thanks for the pointers.
ETA: Readers may want to check my earlier comment pointing to a free substitute for the paywalled Hajek article.