Imagine 3 copies in 3 different worlds, one heads world and two tails worlds
What do you mean by that? That there is a second coin toss after the first one if tails comes up? And changing that label to “world” very much changes what an average utilitarian would care about: if I average utility over the number of people in the world, then who is and isn’t in that world is very important.
Well, yeah, it’s important; that’s the point. But it’s not one of the criteria used for isomorphism.
As for how to set up the problem with 3 worlds, there are a variety of ways that preserve probabilities (or multiply them all by an overall factor, which won’t change decisions). For example, you could do that second coin toss. An example set of worlds would be HH: 1 person, HT: 0 people, TH: 1 person, TT: 1 person.
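For concreteness, here’s a quick check of that bookkeeping in Python (a minimal sketch; the copy labels and the mapping of copies to coin outcomes are my own illustrative assumptions, not anything fixed by the problem):

    from fractions import Fraction

    # Original setup: heads creates 1 copy, tails creates 2 copies,
    # so each copy exists with probability 1/2.
    original = {"heads copy": Fraction(1, 2),
                "tails copy A": Fraction(1, 2),
                "tails copy B": Fraction(1, 2)}

    # Second-coin-toss setup: HH: 1 person, HT: 0, TH: 1, TT: 1.
    # Reading the HH person as the heads copy and the TH/TT people as
    # the two tails copies, each copy now exists with probability 1/4.
    rearranged = {"heads copy": Fraction(1, 4),
                  "tails copy A": Fraction(1, 4),
                  "tails copy B": Fraction(1, 4)}

    # Every existence probability is multiplied by the same overall
    # factor, so relative probabilities (and hence decisions) survive.
    print({k: rearranged[k] / original[k] for k in original})
    # every ratio is Fraction(1, 2): a constant overall factor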
Of course, if you don’t want to mess with worlds, an equivalent effect would come from changing whether the total expected utility is a weighted or an unweighted average over worlds, which is also not one of the criteria for isomorphism.
You seem to be arguing that total utilitarians must reach the same conclusions as average utilitarians, which seems tremendously overstrong to be a useful requirement.
Rather, I’m arguing the reverse: I’m saying that since that’s demonstrably false, the isomorphism axiom is false as stated.
The isomorphism axiom says that selfish and average utilitarian agents should make the same decisions.
The total-vs-average-utilitarian comparison fails the “same utility outcomes for each possible linked decision” condition. Total utilitarians get more utility than average utilitarians in the tails world (and the same utility in the heads world), so they do not get the same utility outcomes.
There are two different ways of having decision-makers act like total utilitarians. One is for them to add the utilities up and then take the unweighted average. Another is for them to be average utilitarians who use weighted averages instead of unweighted averages. The first is not “isomorphic” to average utilitarianism, but the second one is.
The difference between ordinary average utilitarians and these average/total utilitarians is not (at least not explicitly) in the possible decisions, the probabilities, or the utilities. It is in the algorithm they run, which takes the aforementioned things as inputs and spits out a decision. I’m reminded of this comment of yours.
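To make the two recipes concrete (a minimal sketch; the populations and per-person utilities are made-up numbers, and I’m assuming two equiprobable worlds so that “unweighted average” and “expected value” coincide):

    # Each world is (population, average utility per person).
    worlds = [(1, -0.25), (2, 0.75)]
    n = len(worlds)

    # Recipe 1: add the utilities up within each world, then take the
    # unweighted average across worlds.
    totals = [pop * avg for pop, avg in worlds]
    recipe1 = sum(totals) / n                                 # 0.625

    # Recipe 2: stay an "average utilitarian", but weight each world's
    # average utility by its population instead of weighting equally.
    weights = [pop for pop, _ in worlds]
    avgs = [avg for _, avg in worlds]
    recipe2 = sum(w * a for w, a in zip(weights, avgs)) / n   # 0.625

    # Ordinary average utilitarian, for contrast: unweighted average
    # of the per-world averages.
    plain = sum(avgs) / n                                     # 0.25

    assert recipe1 == recipe2  # identical, since total = population * average
    assert recipe1 != plain    # but both differ from the unweighted averager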
Another is for them to be average utilitarians who use weighted averages instead of unweighted averages
Not sure I get this; the weighted average of 1 and 1, for all weights, is 1. Their sum is 2. Therefore these weighted-averaging agents cannot be total utilitarians.
The point you made in your first comment in this series is relevant. I’ve strengthened the conditions of the isomorphism axiom in the post to say “same setup”, which basically means same possible worlds with same numbers of people in them.
Not sure I get this; the weighted average of 1 and 1, for all weights, is 1.
So in the heads world, the average utility is -x. In the tails world, the average utility is 1-x. An unweighted average means that the decision maker goes “and so the utility I evaluate for buying at x is (1-2x)/2”. A weighted average means the decision maker goes “and so the utility I evaluate for buying at x is (2-3x)/2”.
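In code, the two evaluations look like this (a minimal sketch; x is the purchase price being considered, as above):

    # Heads world: 1 person with average utility -x.
    # Tails world: 2 people with average utility 1 - x.

    def unweighted(x):
        # ordinary average utilitarian: equal weight on each world
        return (-x + (1 - x)) / 2          # = (1 - 2x) / 2

    def weighted(x):
        # weight each world's average utility by its population
        return (1 * -x + 2 * (1 - x)) / 2  # = (2 - 3x) / 2

    print(unweighted(0.5), weighted(0.5))  # 0.0 0.25

So the unweighted agent is indifferent at x = 1/2, while the weighted (total-utilitarian-acting) agent is indifferent at x = 2/3; for prices in between, the two algorithms really do recommend different decisions.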
For an example of a decision maker who takes a weighted average, take the selfish agents in my poorly-modified non-anthropic problem. They multiply the payoff and the probability of the “world coming into existence” (the coin landing heads) to get the payoff to a decider in that world, but weight the average by the frequency with which they’re a decider.
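If I’m reading that recipe right, it comes out like this (a sketch under my own assumptions: a fair coin, one decider in the heads world and two in the tails world, hence decider frequencies of 1/3 and 2/3):

    # Payoff to a decider in a world = payoff * P(that world exists);
    # then average those payoffs, weighted by how often you're a decider
    # in that world.

    def selfish(x):
        to_heads_decider = 0.5 * (-x)     # P(heads) * heads-world payoff
        to_tails_decider = 0.5 * (1 - x)  # P(tails) * tails-world payoff
        return (1/3) * to_heads_decider + (2/3) * to_tails_decider

    print(selfish(0.5))  # 0.0833..., i.e. (2 - 3x) / 6 at x = 0.5

That evaluates to (2 - 3x)/6, an overall factor of 1/3 times the weighted average (2 - 3x)/2 above, so it yields the same decisions.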