It’s not obvious to me that Qiaochu would endorse utility functions as a standard for “ideal rationality”. I, for one, do not.
Talking about utility functions can be useful, if nothing else as a concrete example of what one means, if one believes any of the following about ideal rationality (a toy sketch of 2 and 4 follows the list):
1. An ideally rational agent uses one of the standard decision theories (vNM, EDT, CDT, etc.).
2. An ideally rational agent does EU maximization.
3. An ideally rational agent is consequentialist.
4. An ideally rational agent, when evaluating the consequences of its actions, divides up the domain of evaluation into two or more parts, evaluates them separately, and then adds their values together. (For example, for an EU maximizer, the “parts” are possible outcomes or possible world-histories. For a utilitarian, the “parts” are individual persons within each world.)
5. An ideally rational agent has values/preferences that are (or can be) represented by a clearly defined mathematical object.
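Since 2 and 4 are the claims most directly tied to utility functions, here is a minimal toy sketch of what they could mean in practice. Everything in it (the outcomes, probabilities, and utility values) is made up purely for illustration, not taken from anyone’s actual position in this thread:

```python
# Toy expected-utility maximizer. All names and numbers below are
# hypothetical, chosen only to illustrate items 2 and 4.

# Preferences represented by a clearly defined mathematical object (item 5),
# here just a dict mapping outcomes to real-valued utilities.
utility = {
    "rain_with_umbrella": 1.0,
    "rain_no_umbrella": -5.0,
    "dry_with_umbrella": -0.5,
    "dry_no_umbrella": 2.0,
}

# Assumed P(outcome | action), again purely illustrative.
outcome_probs = {
    "take_umbrella": {"rain_with_umbrella": 0.3, "dry_with_umbrella": 0.7},
    "leave_umbrella": {"rain_no_umbrella": 0.3, "dry_no_umbrella": 0.7},
}

def expected_utility(action):
    # Item 4's structure: split the evaluation into parts (possible
    # outcomes), evaluate each part separately with the utility function,
    # then add the results together, weighted by probability.
    return sum(p * utility[outcome]
               for outcome, p in outcome_probs[action].items())

# Item 2: pick the action with the highest expected utility.
best_action = max(outcome_probs, key=expected_utility)
print(best_action, expected_utility(best_action))
```

The sketch is only meant to show item 4’s additive structure: the evaluation splits into separately-valued parts (possible outcomes) whose values are added back together, weighted by probability.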
I guess when you say you don’t “endorse utility functions,” you mean that you don’t endorse 1 or 2. Do you endorse any of the others, and if so, what would you use instead of utility functions to illustrate what you mean?
It’s hard for me to know what 4 and 5 really mean, since they are so abstract. I definitely don’t endorse 1 or 2, and I’m pretty sure I don’t endorse 4 either (integrating over my uncertainty about what you meant). I’m uncertain about 3; it seems plausible but far from clear. I’m certainly not a consequentialist myself and don’t want to be, though maybe I would want to be in some utopian future. Again, I’m not really sure what you mean by 5; it seems almost tautological, since everything is a mathematical object.