For example, suppose Mary currently values her own pleasure and nothing else, but that, were she exposed to certain arguments, she would come to value everyone’s pleasure (in particular, the sum of everyone’s pleasure), and that no other arguments would ever lead her to value anything else. This is obviously unrealistic, but I’m trying to pin down what you mean with a simple example. Would Q_Mary be ‘What maximizes Mary’s pleasure?’ or ‘What maximizes the sum of pleasure?’ or would it be something else?
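To make the two readings concrete, here is a minimal formalization (the world-states $w$ and per-person pleasure functions $p_i$ are my notation, not anything from the original discussion). The first reading asks for

$$\arg\max_w \; p_{\text{Mary}}(w),$$

while the second asks for

$$\arg\max_w \; \sum_i p_i(w),$$

and the puzzle is which of these, if either, Q_Mary denotes, given that Mary would endorse the first now and the second only after exposure to the arguments.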
This comment by Toby Ord is nice:
The question gets especially interesting if we imagine that Mary’s sensitivity to moral arguments is due to a bug in her brain, and otherwise she’s completely on board with her original selfish desires.
Well, the trouble is that not only are we not utility-maximizers, we’re not even stable under small perturbations :)
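One hedged way to make the instability claim precise (all notation mine, not anything from the thread): model a person’s values as a state $v$ updated by experiences via $v' = f(v, e)$. Stability under small perturbations would mean that for every experience $e$ with $\|e\| < \epsilon$, the update satisfies $\|f(v, e) - v\| \le \delta(\epsilon)$, where $\delta(\epsilon) \to 0$ as $\epsilon \to 0$. The Mary example is a counterexample to even this weak property: a single short argument moves her from valuing $p_{\text{Mary}}$ to valuing $\sum_i p_i$, a large jump in value-space produced by a tiny input.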