Suppose individuals have several incommensurable utility functions: would this present a problem for decision theory? If you were presented with Newcomb’s problem, but were at the same time worried about accepting money you didn’t earn, would these sorts of considerations have to be incorporated into a single algorithm?
If not, how do we understand such ethical concerns as being involved in decisions? If so, how do we incorporate such concerns?
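One way to make the difficulty concrete: with several incommensurable utility functions, an agent cannot rank all options on one scale, but it can still discard options that are worse on every dimension. The following sketch illustrates that weaker, Pareto-style comparison; the option names and payoff numbers are purely illustrative, not drawn from the problem above.

```python
def dominates(u, v):
    """True if option u is at least as good as v on every utility
    function and strictly better on at least one (Pareto dominance)."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_undominated(options):
    """Keep only options that no other option Pareto-dominates."""
    return {name: u for name, u in options.items()
            if not any(dominates(v, u)
                       for other, v in options.items() if other != name)}

# Toy Newcomb-flavored example with two incommensurable dimensions:
# (monetary payoff, a "deservingness" score). Values are hypothetical.
options = {
    "one-box":   (1_000_000, 0),  # large but unearned windfall
    "two-box":   (1_000, 1),      # small sum that feels earned
    "walk away": (0, 1),
}
print(pareto_undominated(options))
```

Note what the sketch does and does not deliver: "walk away" is eliminated because "two-box" beats it on money and ties on deservingness, but the remaining two options stay incomparable, which is exactly the residue the question asks decision theory to handle.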