Bounded utility functions effectively give “bounded probability functions,” in the sense that you (more or less) stop caring about things with very low probability.
For example, if my maximum utility is 1,000, then the most that something with a probability of one in a billion can contribute to my expected utility is 1,000 × 0.000000001 = 0.000001, an extremely small amount, so something that I will care about very little. The probability of the 3^^^3 scenarios may be more than one in 3^^^3, but it will still be small enough that a bounded utility function won’t care about situations like that, at least not to any significant extent.
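A minimal sketch of that arithmetic (the cap of 1,000 and the probabilities are just the numbers from this example):

```python
U_MAX = 1_000  # the utility bound from the example above

def max_contribution(p: float) -> float:
    """Upper bound on how much an event with probability p can move
    expected utility, given that utility never exceeds U_MAX."""
    return p * U_MAX

print(max_contribution(1e-9))   # 1e-06: the one-in-a-billion case above
print(max_contribution(1e-30))  # 1e-27: small enough to ignore entirely
```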
That is precisely the reason that it will do the things you object to, if that situation comes up.
That is no different from pointing out that the post’s proposal will reject a “mugging” even when it will actually cost 3^^^3 lives.
Both proposals have that particular downside. That is not something peculiar to mine.
Bounded utility functions mean you stop caring about things with very high utility. That you care less about certain low-probability events is just a side effect. But those high-utility events can also have very high probability, and you still won’t care.
If you want to just stop caring about really low probability events, why not just do that?
I just explained. There is no situation involving 3^^^3 people which will ever have a high probability. Telling me I need to adopt a utility function which will handle such situations well is trying to mug me, because such situations will never come up.
Also, I don’t care about the difference between 3^^^^^3 people and 3^^^^^^3 people even if the probability is 100%, and neither does anyone else. So it isn’t true that I just want to stop caring about low probability events. My utility is actually bounded. That’s why I suggest using a bounded utility function, like everyone else does.
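Here is a minimal sketch of what a bounded utility function can look like, assuming a simple saturating curve (the specific form and the scale constant are illustrative choices, and genuinely 3^^^^^3-sized inputs won’t fit in a float, so merely huge numbers stand in for them):

```python
U_MAX = 1_000.0  # the utility bound from the running example
SCALE = 1e6      # illustrative: the point where utility reaches half of U_MAX

def bounded_utility(n_people: float) -> float:
    """Saturating utility: roughly linear for small n, flattening toward U_MAX."""
    return U_MAX * n_people / (n_people + SCALE)

# Ordinary-scale differences still register clearly...
print(bounded_utility(1e3))   # ~1.0
print(bounded_utility(1e6))   # 500.0
# ...but astronomically different stakes become indistinguishable,
# which is exactly the "I don't care about the difference" behavior.
print(bounded_utility(1e100) == bounded_utility(1e200))  # True: both are ~U_MAX
```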
There is no situation involving 3^^^3 people which will ever have a high probability.
Really? No situation? Not even if we discover new laws of physics that allow us to have infinite computing power?
Telling me I need to adopt a utility function which will handle such situations well is trying to mug me, because such situations will never come up.
We are talking about utility functions. Probability is irrelevant. All that matters for the utility function is that if the situation came up, you would care about it.
Also, I don’t care about the difference between 3^^^^^3 people and 3^^^^^^3 people even if the probability is 100%, and neither does anyone else.
I totally disagree with you. These numbers are so incomprehensibly huge you can’t picture them in your head, sure. There is massive scope insensitivity. But if you had to make moral choices that affect those two numbers of people, you should always value the bigger number proportionally more.
E.g. if you had to torture 3^^^^^3 people to save 3^^^^^^3 people from getting dust specks in their eyes. Or if you had to make bets involving the probabilities of various things happening to the two groups. Etc. I don’t think you can make these decisions correctly if you have a bounded utility function.
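A sketch of the kind of bet where the two approaches come apart, assuming the same illustrative saturating curve as above (again using merely huge numbers in place of the 3^^^^^3-scale ones):

```python
U_MAX, SCALE = 1_000.0, 1e6  # same illustrative bound and scale as before

def bounded_utility(n: float) -> float:
    # Saturating utility: flattens toward U_MAX for large n.
    return U_MAX * n / (n + SCALE)

def linear_utility(n: float) -> float:
    # Unbounded utility: values people proportionally, no matter the scale.
    return n

def prefers_gamble(u, n_certain: float, n_gamble: float, p: float) -> bool:
    """True if utility function u prefers a probability-p chance of saving
    n_gamble people over saving n_certain people for sure."""
    return p * u(n_gamble) > u(n_certain)

# Save 10^12 people for sure, or take a 50% shot at saving 10^15?
print(prefers_gamble(linear_utility, 1e12, 1e15, 0.5))   # True: 500x the expected lives
print(prefers_gamble(bounded_utility, 1e12, 1e15, 0.5))  # False: both outcomes are saturated
```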
If you don’t make those decisions correctly, well, those 3^^^3 people probably contain a basically infinite number of copies of you. By making the correct tradeoffs, you maximize the probability that the other versions of yourself find themselves in a universe with higher utility.