This does not fit the evidence, however. Humans are certainly not expected utility maximisers (probably the closest would be financial traders who try to approximate expected money maximisers, but only in their professional work),
Huh? Omohundro’s thesis is not ‘humans are expected-dollar maximizers’. (And pointing this out is not adopting an improbable convoluted utility function.)
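To make the distinction concrete, here is a minimal sketch (all numbers hypothetical) of an expected-utility maximizer that is not an expected-dollar maximizer: with a concave (log) utility of wealth, it refuses a bet whose expected dollar value is positive.

```python
import math

# Minimal sketch, hypothetical numbers: an agent maximizing expected
# log-utility of wealth turns down a positive-expected-dollar bet.
wealth = 1000.0

# Bet: 50% chance of winning $1,100, 50% chance of losing $900.
ev_dollars = 0.5 * 1100 + 0.5 * (-900)  # +$100: a dollar-maximizer accepts

eu_accept = 0.5 * math.log(wealth + 1100) + 0.5 * math.log(wealth - 900)
eu_refuse = math.log(wealth)

print(ev_dollars)             # 100.0
print(eu_accept < eu_refuse)  # True: the utility-maximizer refuses the bet
```

So pointing at behavior that fails to maximize dollars is simply not evidence against expected utility maximization.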
they don’t often try to improve their rationality (in fact some specifically avoid doing so (many examples of this are religious, such as the Puritan John Cotton who wrote ‘the more learned and witty you bee, the more fit to act for Satan will you bee’ (Hof62)),
Falls under the ‘weird’ criterion, no? These people are espousing a defensive tactic (more education correlates with less religiosity) via a socially-communicated meme; this is a weird self-sustaining belief which has had to gradually evolve new tactics over many millennia, and it probably stems from peculiar properties of evolved human consciousness regarding agent detection.
and some sacrifice cognitive ability to other pleasures (BBJ+03)), and many turn their backs on high-powered careers.
What part of “expected utility maximizer” don’t you understand?
Some humans do desire self-improvement (in the sense of the paper), and Omohundro cites this as evidence for his thesis. Some humans don’t desire it, though, and this should be taken as contrary evidence (or as evidence that Omohundro’s model of what constitutes self-improvement is overly narrow).
Or it reflects utility-maximizing behavior under constraints that humans face but that hardly any AI would, e.g. (a toy discounting calculation follows this list):
the high opportunity costs of learning (I’ve read lifetime income is maximized at the master’s degree level—because PhDs take too much time!)
the limited lifespan of humans
the even more limited productive lifespan of a human (consider the decline of intelligence already apparent by age 40 or 50, and the simultaneous sharp decline in scientific achievement observed in Jones’s samples)
and the high discount rates of almost everyone (rarely less than 5%, often double-digits)
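A back-of-the-envelope sketch of the discounting point, with entirely hypothetical salary figures: undiscounted, the longer education can win on raw totals, but at the 5–10% discount rates people actually exhibit, the foregone early earnings dominate.

```python
def npv(cashflows, rate):
    """Present value of year-indexed cashflows at a constant annual discount rate."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

YEARS = 40  # hypothetical working horizon

masters = [80_000] * YEARS               # start earning immediately
phd = [0] * 5 + [100_000] * (YEARS - 5)  # five unpaid years, then a higher salary

for rate in (0.00, 0.05, 0.10):
    print(rate, round(npv(masters, rate)), round(npv(phd, rate)))
# At rate 0.00 the PhD path has the larger total; at 0.05 and 0.10 the
# master's path wins, because the foregone early years are worth the most.
```

The same structure applies to any self-improvement with up-front costs: a steep enough discount rate over a short enough productive lifespan makes it a losing trade for a mortal agent, while an agent with a long horizon and low discounting faces the opposite calculation.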
and some sacrifice cognitive ability to other pleasures (BBJ+03)), and many turn their backs on high-powered careers.
What part of “expected utility maximizer” don’t you understand?
Nitpick: It’s a bit confusing to quote across a bracket boundary like that. The bit about sacrificing cognitive ability for other pleasures is an example of “they don’t often try to improve their rationality”, whereas turning backs on careers was about expected utility maximization.
I agree that turning your back on a high-powered career is not a good example of failing to maximize utility, but trading cognition for pleasure seems like a reasonable example of not valuing, or failing to act on the value of, being more rational.
trading cognition for pleasure seems like a reasonable example of not valuing, or failing to act on the value of, being more rational.
I think it’s the same thing as before. ‘AI drives’ is about a particular set of behaviors being an instrumental value for a large subset of all plausible agents; rationality is one of these instrumental (and not terminal) drives.
Providing an instance where an agent trades off an instrumental good (rationality) for a terminal good (pleasure) is simply not a counter-example—what else would an agent do when offered such a tradeoff? It would be like saying “supposedly, people earn money so as to spend it on things they want; but look! they’re spending money on things like trips to Tahiti! Clearly that is not why they really earn money...”
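A toy model (utilities invented purely for illustration) of why such a trade is unsurprising: cognition here has no terminal value at all, only a multiplier effect on future pleasure, and the utility-maximizing agent still sells off more than half of it once immediate pleasure has diminishing returns.

```python
import math

def terminal_utility(cognition_sold: float) -> float:
    # Immediate pleasure bought by giving up cognition (diminishing returns):
    immediate = 6.0 * math.sqrt(cognition_sold)
    # Future pleasure produced by the cognition retained (purely instrumental):
    future = 4.0 * (1.0 - cognition_sold)
    return immediate + future

# Scan the possible trades and take the utility-maximizing one.
best_utility, best_sold = max(
    (terminal_utility(i / 100), i / 100) for i in range(101)
)
print(best_sold, round(best_utility, 2))  # 0.56 6.25: it sells over half
```

Observing that the agent sells cognition tells you nothing about whether it values rationality terminally; it only tells you about the exchange rate it was offered.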