For strategies: This ties back into the situation where there's an observable event $X$ that you can condition your strategy on, and the strategy space has a product structure $S = S_X \times S_{\neg X}$. This product structure seems important, since you should generally expect utility functions $u$ to factor in the sense that $u(s,t) = q\,u_X(s) + (1-q)\,u_{\neg X}(t)$ for some functions $u_X$ and $u_{\neg X}$, where $q$ is the probability of $X$. (I think for the relevance section, you want to assume that whenever there is such a product structure, $p$ is supported on utility functions that factor, and you can define conditional utility for such functions.) Arbitrary permutations of $S$ that do not preserve the product structure don't seem like true symmetries, and I don't think an aggregation rule should be expected to be invariant under them. In the real world, there are many observations that people can and do take into account when deciding what to do, so a good model of strategy space should have a very rich structure.
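To make the factoring condition concrete, here's a minimal sketch; the function names and toy numbers are my own illustrative assumptions, not anything from the thread. The point is that when $u$ factors, the $X$-component of a strategy can be optimized independently of the $\neg X$-component:

```python
def make_factored_utility(u_X, u_not_X, q):
    """Build u(s, t) = q*u_X(s) + (1 - q)*u_not_X(t) on S = S_X x S_notX."""
    def u(s, t):
        return q * u_X(s) + (1 - q) * u_not_X(t)
    return u

# Toy example: the event X is observed with probability q = 0.3.
u_X = {"a": 1.0, "b": 0.0}.get      # conditional utility given X
u_not_X = {"c": 0.5, "d": 0.2}.get  # conditional utility given not-X
u = make_factored_utility(u_X, u_not_X, q=0.3)

# Because u factors, the value of choosing "a" over "b" (conditional on X)
# doesn't depend on what is chosen conditional on not-X, and vice versa.
gap_given_c = u("a", "c") - u("b", "c")
gap_given_d = u("a", "d") - u("b", "d")
assert abs(gap_given_c - gap_given_d) < 1e-9
```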
For outcomes, which is what utility functions should be defined on anyway: Outcomes differ in how achievable they are. I have an intuition that if an outcome is impossible, then removing it from the model shouldn't have much effect. Like, you shouldn't be able to rig the aggregator in favor of moral theory 1 over moral theory 2 by having the model take into account all the outcomes that could realistically be achieved, plus a bunch of impossible outcomes that theory 2 thinks are either really good or really bad, and theory 1 thinks are close to neutral. A natural counter-argument is that before you know which outcomes are impossible, any Pareto-optimal way of aggregating your possible preference functions must not change based on what turns out to be achievable; I'll have to think about that more. Also, approximate symmetries between people's preferences seem relevant to interpersonal utility comparison in practice, in the sense that two people's preferences tend to look fairly similar to each other in structure, but with each person's utility function centered largely around what happens to themselves rather than to the other person. This seems to help us make comparisons of the form "the difference between outcomes 1 and 2 is more important for person A than for person B"; I'm not sure if this way of describing it is making sense.
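Here's a toy numerical illustration of the rigging worry, using min-max (range) normalization with equal credences as a stand-in aggregation rule; both the choice of normalization and the numbers are my own assumptions, not something established in the thread:

```python
def normalize(u):
    """Rescale a utility dict to [0, 1] over all outcomes in the model."""
    lo, hi = min(u.values()), max(u.values())
    return {o: (v - lo) / (hi - lo) for o, v in u.items()}

def aggregate(theories, outcomes):
    """Equal-credence sum of normalized utilities over the given outcomes."""
    norms = [normalize(u) for u in theories]
    return {o: sum(n[o] for n in norms) for o in outcomes}

achievable = ["o1", "o2"]
theory1 = {"o1": 1.0, "o2": 0.0}
theory2 = {"o1": 0.0, "o2": 1.0}

# With only achievable outcomes, the two theories disagree symmetrically.
print(aggregate([theory1, theory2], achievable))  # {'o1': 1.0, 'o2': 1.0}

# Add an impossible outcome that theory 2 thinks is really bad and theory 1
# thinks is near neutral: theory 2's preferences over o1 vs o2 get compressed.
theory1_big = {**theory1, "impossible": 0.5}
theory2_big = {**theory2, "impossible": -9.0}
print(aggregate([theory1_big, theory2_big], achievable))
# {'o1': 1.9, 'o2': 1.0} -- o1 now wins, though nothing achievable changed.
```

Removing the impossible outcome restores the tie, which matches the intuition that impossible outcomes shouldn't have much effect, at least under normalization rules that are sensitive to the full outcome set.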
Ok, I chose the picture proof because it was a particularly simple example of symmetry. What kind of internal structure are you thinking of?
OK, got a better formalism: https://agentfoundations.org/item?id=1449
I think I’ve got something that works; I’ll post it tomorrow.