and some sacrifice cognitive ability to other pleasures (BBJ+03)), and many turn their backs on high-powered careers.
What part of “expected utility maximizer” don’t you understand?
Nitpick: it’s a bit confusing to quote across a bracket boundary like that. The bit about sacrificing cognitive ability for other pleasures is an example of “they don’t often try to improve their rationality”, whereas the bit about turning their backs on high-powered careers was the example about expected utility maximization.
I agree that turning your back on a high-powered career is not a good example of failing to maximize utility, but trading cognition for pleasure seems like a reasonable example of not valuing, or failing to act on the value of, being more rational.
trading cognition for pleasure seems like a reasonable example of not valuing, or failing to act on the value of, being more rational.
I think it’s the same thing as before. The AI-drives thesis is about a particular set of behaviors being instrumentally valuable to a large subset of all plausible agents; rationality is one of these instrumental (not terminal) drives.
Providing an instance where an agent trades off an instrumental good (rationality) for a terminal good (pleasure) is simply not a counterexample; what else would an agent do when offered such a tradeoff? It would be like saying “supposedly, people earn money so as to spend it on things they want; but look! they’re spending money on things like trips to Tahiti! Clearly that is not why they really earn money...”
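To make the tradeoff concrete, here is a minimal toy model in Python (the starting endowment, the 1:1 exchange rate, and the square-root production function are all illustrative assumptions of mine, not anything from the post): an agent whose only terminal good is pleasure, and for whom cognition matters solely because it produces pleasure later.

```python
import math

# Toy model: pleasure is the agent's only terminal good. Cognition is purely
# instrumental: it enters utility only through the future pleasure it yields.
# The functional form and all constants are assumptions chosen for illustration.

def lifetime_utility(cognition_kept, pleasure_now):
    # Future pleasure produced by the cognition the agent keeps,
    # with diminishing returns (assumed).
    future_pleasure = 2 * math.sqrt(cognition_kept)
    return pleasure_now + future_pleasure

# The agent starts with 10 units of cognition and may convert any whole
# number of them into immediate pleasure at an assumed 1:1 rate.
best_utility, best_trade = max(
    (lifetime_utility(10 - traded, traded), traded) for traded in range(11)
)
print(f"trade {best_trade} units of cognition -> utility {best_utility:.2f}")
# -> trade 9 units of cognition -> utility 11.00
```

In this toy model the expected utility maximizer trades away nine of its ten units of cognition, so observing such a trade is perfectly consistent with utility maximization and tells you nothing about whether the agent values rationality terminally.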