I feel like it’s not even clear in that prototypical example. Utility isn’t directly transferable. While you can argue for (cooperate, cooperate) whether the utility at stake is (one human, one hundred paperclips) or (one hundred humans, one paperclip), if both of those situations come up, playing (cooperate, cooperate) in both is not Pareto efficient. I’m thinking it might be good to normalize using the a priori probability of each situation coming up. For example, if each of the above possibilities has a 50% chance of coming up, and you know only one will happen, the obvious thing to do is to maximize humans if you get the option with more humans, and paperclips if you get the one with more paperclips. It’s what you’d do if you were going to face both, once each.
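To make the Pareto-inefficiency claim concrete, here is a minimal sketch under some assumed numbers (my assumption, not from the example): in each situation, compromising via (cooperate, cooperate) splits the stakes 50/50, while the alternative hands the full stakes to one side. Specializing per situation then beats compromising in both on both axes:

```python
# Assumed stakes: situation 1 is (1 human vs 100 paperclips),
# situation 2 is (100 humans vs 1 paperclip).
situations = {
    "sit1": {"humans_at_stake": 1, "clips_at_stake": 100},
    "sit2": {"humans_at_stake": 100, "clips_at_stake": 1},
}

def compromise(s):
    # (cooperate, cooperate): assume each side secures half its stakes
    return (s["humans_at_stake"] / 2, s["clips_at_stake"] / 2)

def all_humans(s):
    return (s["humans_at_stake"], 0.0)

def all_clips(s):
    return (0.0, s["clips_at_stake"])

def expected(policy, p=0.5):
    # Each situation occurs with a priori probability p = 0.5,
    # and only one of them actually happens.
    h = sum(p * policy[name](s)[0] for name, s in situations.items())
    c = sum(p * policy[name](s)[1] for name, s in situations.items())
    return (h, c)

# Policy A: compromise in both situations.
both_compromise = expected({"sit1": compromise, "sit2": compromise})
# Policy B: all paperclips where paperclips are cheap to give,
# all humans where humans are plentiful.
specialize = expected({"sit1": all_clips, "sit2": all_humans})

print(both_compromise)  # (25.25, 25.25)
print(specialize)       # (50.0, 50.0) -- better for both parties
```

Under these (made-up) payoffs, specializing yields strictly more expected humans and strictly more expected paperclips than compromising in both situations, which is the sense in which (cooperate, cooperate) everywhere fails to be Pareto efficient.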