Want to try answering my questions/problems about preference utilitarianism?
Maybe I would state my first question above a little differently today: Certain decision theories (such as the UDT/FDT/LDT family) already incorporate some preference-utilitarian-like intuitions, by suggesting that taking certain other agents’ preferences into account when making certain decisions is a good idea, if, e.g., this is logically correlated with them taking your preferences into account. Does preference utilitarianism go beyond this, and say that you should take their preferences into account even if there is no decision-theoretic reason to do so, as a matter of pure axiology (values / utility function)? Do you then take their preferences into account again as part of decision theory, or do you adopt a decision theory which denies or ignores such correlations/linkages/reciprocities (e.g., by judging them to be illusions or mistakes or some such)? Or does your preference utilitarianism do something else, like deny the division between decision theory and axiology? Also, does your utility function contain non-preference-utilitarian elements, i.e., idiosyncratic preferences that aren’t about satisfying other agents’ preferences, and if so, how do you choose the weights between your own preferences and other agents’?
(I guess this question/objection also applies to hedonic utilitarianism, to a somewhat lesser degree, because if a hedonic utilitarian comes across a hedonic egoist, he would also “double count” the latter’s hedons, once in his own utility function, and once again if his decision theory recommends taking the latter’s preferences into account. Another alternative that avoids this “double counting” is axiological egoism + some sort of advanced/cooperative decision theory, but then selfish values have their own problems. So my own position on this topic is one of high confusion and uncertainty.)
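To make the “double counting” worry a bit more concrete, here is a toy sketch in Python. All names and weights are hypothetical and not from the original discussion; it only illustrates the structure of the objection (the other agent’s welfare entering once through the axiology and then again through a correlation-aware decision rule), not any particular decision theory.

```python
# Toy illustration of the "double counting" worry. All weights and names are
# hypothetical; this sketches the structure of the objection, not a real
# model of UDT/FDT/LDT.

def axiological_utility(own_welfare: float, other_welfare: float,
                        altruism_weight: float = 1.0) -> float:
    """A (hedonic or preference) utilitarian's values: the other agent's
    welfare already counts here, as a matter of pure axiology."""
    return own_welfare + altruism_weight * other_welfare

def decision_score(own_welfare: float, other_welfare: float,
                   correlation_weight: float = 1.0) -> float:
    """A correlation-aware decision rule may then weight the other agent's
    welfare *again*, because its choices are taken to be logically linked
    with ours."""
    return (axiological_utility(own_welfare, other_welfare)
            + correlation_weight * other_welfare)

# The other agent's welfare ends up counted twice: once in the utility
# function, and once more by the decision theory.
print(decision_score(own_welfare=1.0, other_welfare=1.0))  # -> 3.0 rather than 2.0
```

Under this framing, the axiological-egoism alternative mentioned above would amount to setting `altruism_weight` to zero and keeping only the decision-theoretic term, which avoids the double count but raises the problems of selfish values noted in the parenthetical.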