Another is for them to be average utilitarians who use weighted averages instead of unweighted averages.
Not sure I get this; the weighted average of 1 and 1, for all weights, is 1. Their sum is 2. Therefore these weighted-averaging agents cannot be total utilitarians.
The point you made in your first comment in this series is relevant. I’ve strengthened the conditions of the isomorphism axiom in the post to say “same setup”, which basically means same possible worlds with same numbers of people in them.
Not sure I get this; the weighted average of 1 and 1, for all weights, is 1.
So in the heads world, the average utility is -x. In the tails world, the average utility is 1-x. An unweighted average means that the decision maker goes “and so the utility I evaluate for buying at x is ½·(-x) + ½·(1-x) = (1-2x)/2”. A weighted average means the decision maker goes “and so the utility I evaluate for buying at x is ½·1·(-x) + ½·2·(1-x) = (2-3x)/2”, with each world’s term scaled by its number of deciders.
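To make the arithmetic concrete, here is a minimal sketch in Python (the function names are mine; the fair coin, the one decider on heads, and the two deciders on tails come from the setup above, and I’m reading the weighted version as “probability times number of deciders per world”, which reproduces (2-3x)/2):

```python
def unweighted_value(x):
    """Plain average of the two worlds' average utilities."""
    heads_avg = -x      # heads world: the ticket pays nothing
    tails_avg = 1 - x   # tails world: the ticket pays 1
    return (heads_avg + tails_avg) / 2            # = (1 - 2x) / 2

def weighted_value(x):
    """Each world's term scaled by probability times its number of deciders."""
    heads_avg, tails_avg = -x, 1 - x
    p = 0.5                                       # fair coin
    return p * 1 * heads_avg + p * 2 * tails_avg  # = (2 - 3x) / 2

# Quick check at x = 0.25:
assert unweighted_value(0.25) == (1 - 2 * 0.25) / 2
assert weighted_value(0.25) == (2 - 3 * 0.25) / 2
```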
For an example of a decision maker who takes a weighted average, take the selfish agents in my poorly-modified non-anthropic problem. They multiply the payoff and the probability of the “world coming into existence” (the coin landing heads) to get the payoff to a decider in that world, but weight the average by the frequency with which they’re a decider.
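The same rule, written out generally (a sketch under my reading of the comment; the function name and the (probability, deciders, payoff) encoding are hypothetical, and only the multiply-then-weight recipe comes from the text):

```python
def selfish_decider_value(worlds):
    """worlds: iterable of (probability, n_deciders, payoff_to_a_decider).

    Each payoff is multiplied by the probability that its world comes into
    existence, and the terms are weighted by how often the agent finds
    itself a decider in that world.
    """
    return sum(p * n * payoff for p, n, payoff in worlds)

# The ticket example above in this encoding, at x = 0.25:
# heads: prob 1/2, 1 decider, payoff -x; tails: prob 1/2, 2 deciders, payoff 1-x.
print(selfish_decider_value([(0.5, 1, -0.25), (0.5, 2, 0.75)]))  # 0.625 = (2 - 3x)/2
```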