Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter’s dice are loaded, choosing 1A and 2B could well be perfectly rational.
...there were about 500 comments involving “Marshall”—and now they all appear to have been deleted—leaving a trail like this:
Did you delete your account there?
I don’t pay much attention to karma—but it is weird what gets voted up and down.
For a rationalist community, people seem to go for conformity and “applause signs” much more than I would have expected—while criticisms and disagreements seem to be punished more than I would have thought.
Anyway, interesting raw material for groupthink studies—some day.
Re: First, foremost, fundamentally, above all else: Rational agents should WIN.
When Deep Blue beat Garry Kasparov, did that prove that Garry Kasparov was “irrational”?
It seems as though it would be unreasonable to expect even highly rational agents to win—if pitted against superior competition. Rational agents can lose in other ways as well—e.g. by not having access to useful information.
Since there are plenty of ways in which rational agents can lose, “winning” seems unlikely to be part of a reasonable definition of rationality.
But what good reason is there not to? How can you be worse off from knowing in advance what you’ll do in the worst cases?
The answer seems trivial: you may have wasted a bunch of time and energy performing calculations relating to what to do in a hypothetical situation that you might never face.
If the calculations can be performed later, then that will often be better—since then more information will be available—and possibly the calculations may not have to be performed at all.
Calculating in advance can be good—if you fear that you may not have time to calculate later—or (obviously) if the calculations affect the choices to be taken now. However, the act of performing calculations has associated time and energy costs—so it is best to use your “calculating” time wisely.
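To make the trade-off concrete, here is a minimal back-of-the-envelope sketch (the probabilities and costs are invented for illustration; nothing here comes from the original discussion):

# Expected-cost comparison: calculate now vs. defer the calculation.
# All numbers are illustrative placeholders.
p_needed = 0.1    # chance the hypothetical situation actually arises
cost_now = 1.0    # time/energy cost of calculating in advance
cost_later = 1.5  # cost of the same calculation done under time pressure
p_no_time = 0.2   # chance there is no time to calculate when it matters
penalty = 10.0    # cost of facing the situation unprepared

ev_precompute = cost_now  # always paid, even if the situation never arises
ev_defer = p_needed * ((1 - p_no_time) * cost_later + p_no_time * penalty)

print("precompute" if ev_precompute < ev_defer else "defer")

With these placeholder numbers, deferring wins, which is the point above: the advance calculation is always paid for, while the deferred one is only paid for if the situation actually arises.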
For the same reason that when you’re buying a stock you think will go up, you decide how far it has to decline before it means you were wrong
Do any investors actually do that? I don’t mean to be rude—but why haven’t they got better things to do with their time?
I didn’t find “Engines” very positive. I agree with Moravec:
“I found the speculations absurdly anthropocentric. Here we have machines millions of times more intelligent, plentiful, fecund, and industrious than ourselves, evolving and planning circles around us. And every single one exists only to support us in luxury in our ponderous, glacial, antique bodies and dim witted minds. There is no hint in Drexler’s discussion of the potential lost by keeping our creations so totally enslaved.”
IMO, Drexler’s proposed future is an unlikely nightmare world.
Anon, you are arguing for “incorrect”, not “cynical”. Please consider the difference.
Like it or not, biologists are basically correct in identifying the primary goal of organisms as self-reproduction. That is the nature of the attractor to which all organisms’ goal systems are drawn (though see also this essay of mine). Yes, some organisms break, and other organisms find themselves in unfamiliar environments—but if anything can be said to be the goal of organisms, then that is it. The exceptions (like your contraceptives) just prove the rule. Such organisms are acting in a way that is intended to promote their genetic fitness. It is just that some of their assumptions about the environment might be wrong. Alas, contraceptives are not a very good example, because they prevent disease, make sex easier (thus helping to create pair bonds), and have other positive effects.
Organisms tend to act as though their number one motive is self-reproduction. Philosophers may be able to debate whether that motive is “explicitly represented in their brains”—but if it looks like a duck and quacks like a duck, whether philosophers are prepared to call it a duck seems like a side issue.
It is the same as with Deep Blue. Deep Blue acts as though its number one motive is to win games of chess (thus inflating IBM’s stock price). That is the single most helpful simple way in which to understand its behaviour. If you actually look at its utility function, it has thousands of elements, not one of which refers to winning games of chess—but so what? It is not “cynical” to treat Deep Blue as trying to win games of chess. That is what it is doing!
Consider the hash that some people make of evolutionary psychology in trying to be cynical—assuming that humans have a subconscious motive to promote their inclusive genetic fitness.
What is “cynical” about that? It is a central organising principle in biology that organisms tend to act in such a way as to promote their own inclusive genetic fitness. There are a few caveats—but why would viewing people like that be “cynical”? I do not see anything wrong with promoting your own genetic fitness—rather, it seems to me like a perfectly natural thing to do.
Looking at the population explosion, I would say that the world appears to be full of people who are acting in a manner that is highly effective at promoting their own genetic fitness. They are doing something wrong? What makes you think that?
Re: The parental grief is not even subconsciously about reproductive value—otherwise it would update for Canadian reproductive value instead of !Kung reproductive value.
I think that a better way to put this would be to say that the Canadian humans miscalculate reproductive value—using subconscious math more appropriate for bushmen.
If you want to look at the importance to humans of the reproductive value represented by children, the most obvious studies to look at are the ones that deal with adopted kids—comparing them with more typical ones. For example, look at the statistics on how often such kids get beaten, suffer from child abuse, die or commit suicide.
Re: Parents do not care about children for the sake of their reproductive contribution. Parents care about children for their own sake [...]
Except where paternity suits are involved, presumably.
[Tim, you post this comment every time I talk about evolutionary psychology, and it’s the same comment every time, and it doesn’t add anything new on each new occasion. If these were standard theories I could forgive it, but not considering that they’re your own personal versions. I’ve already asked you to stop. --EY]
Re: Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion)—we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.
One big problem is that they tend to systematically ignore memes.
Human brains are parasitised by replicators that hijack them for their own ends. The behaviour of a Catholic priest has relatively little to do with the inclusive genetic fitness of the priest—and a lot to do with the inclusive genetic fitness of the Catholicism meme. Pinker and many of the other evo-psych guys still show little sign of “getting” this.
Wasn’t there some material in CFAI about solving the wirehead problem?
The analogy between the theory that humans behave like expected utility maximisers and the theory that atoms behave like billiard balls could be criticised, but it generally seems quite appropriate to me.
In dealing with your example, I didn’t “change the space of states or choices”. All I did was specify a utility function. The input states and output states were exactly as you specified them to be. The agent could see what choices were available, and then it picked one of them—according to the maximum value of the utility function I specified.
The corresponding real world example is an agent that prefers Boston to Atlanta, Chicago to Boston, and Atlanta to Chicago. I simply showed how a utility maximiser could represent such preferences. Such an agent would drive in circles—but that is not necessarily irrational behaviour.
Of course much of the value of expected utility theory arises when you use short and simple utility functions—however, if you are prepared to use more complex utility functions, there really are very few limits on what behaviours can be represented.
The possibility of using complex utility functions does not in any way negate the value of the theory for providing a model of rational economic behaviour. In economics, the utility function is pretty fixed: maximise profit, with specified risk aversion and future discounting. That specifies an ideal which real economic agents approximate. Plugging in an arbitrary utility function is simply an illegal operation in that context.
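For concreteness, here is one conventional way to write such a fixed economic utility function down in code (a sketch only, assuming CRRA risk aversion and exponential future discounting; the parameter values are illustrative):

# Expected discounted utility of uncertain profit streams.
# CRRA utility and exponential discounting are standard textbook
# choices; the parameters are illustrative, not from the comment above.

def crra(profit, gamma=2.0):
    # Constant relative risk aversion: gamma > 0 penalises risk.
    return profit ** (1 - gamma) / (1 - gamma)

def expected_discounted_utility(scenarios, probs, discount=0.95):
    # Each scenario is a list of per-period profits; probs weight them.
    return sum(p * sum(discount ** t * crra(x) for t, x in enumerate(s))
               for s, p in zip(scenarios, probs))

# A risky profit stream vs. a safe one with the same expected profit:
print(expected_discounted_utility([[100, 150], [100, 50]], [0.5, 0.5]))
print(expected_discounted_utility([[100, 100]], [1.0]))

The safe stream scores higher, as risk aversion requires; the point is that the functional form is fixed, and only a few parameters (risk aversion, discount rate) are left free.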
Re: The core problem is simple. If the targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere is the hard part.
The utility function of Deep Blue has 8,000 parts—and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered—but the eventual functional outcome would be much the same—a powerful chess computer.
The “targeting information” is actually a bunch of implementation details that can be effectively recreated from the goal—if that should prove to be necessary.
It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue’s utility function while improving it would actually have a crippling negative effect on its future development. Similarly with human values: those are a bunch of implementation details—not the real target.
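A toy illustration of the claim (the features and weights below are invented; Deep Blue’s actual evaluation terms are not public in this form):

# An evaluation function built from many positional features, none of
# which mentions "winning"; yet maximising it over the legal moves
# yields a program that acts as though it is trying to win.

WEIGHTS = {
    "material_balance": 1.00,
    "centre_control":   0.10,
    "king_safety":      0.25,
    "pawn_structure":   0.05,
    # ... a real engine would have thousands more such entries ...
}

def evaluate(features):
    # The thousands of parts are implementation details; the weighted
    # sum is just a proxy for the real target: winning the game.
    return sum(WEIGHTS[name] * value for name, value in features.items())

def best_move(moves):
    # moves: list of (move, resulting-position features) pairs.
    return max(moves, key=lambda m: evaluate(m[1]))[0]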
I note that filial cannibalism is quite common on this planet.
Gamete selection has quite a few problems. It only operates on half the genome at a time—and selection is performed before many of the genes can be expressed. Of course gamete selection is cheap.
What spiders do—i.e. produce lots of offspring, and have many die as infants—has a huge number of evolutionary benefits. The lost babies do not cost very much, and the value of the selection that acts on them is great.
Human beings can’t easily get there—since they currently rely on gestation inside a human female body for nine months—but, make no mistake, if we could produce lots of young, and kill most of them at a young age, then that would be a vastly superior system in terms of the quantity and quality of the resulting selection.
Human females do abort quite a few foetuses after a month or so—ones that fail internal and maternal integrity tests—but the whole system is obviously appallingly inefficient.
I think Eliezer is due for congratulation here. This series is nothing short of a mammoth intellectual achievement [...]
It seems like an odd place for congratulations—since the conclusion here seems to be about 180 degrees out of whack—and hardly anyone seems to agree with it. I asked how one of the ideas here was remotely defensible. So far, there have been no takers.
If there is not even a debate, whoever is incorrect on this topic would seem to be in danger of failing to update. Of course personally, I think it is Eliezer who needs to update. I have quite a bit in common with Eliezer—and I’d like to be on the same page as him—but it is difficult to do when he insists on defending positions that I regard as poorly-conceived.
Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.
That is silly—the associated utility function is the one you have just explicitly given. To rephrase:
if (senses contain (A,B)) selecting A has high utility; else
if (senses contain (B,C)) selecting B has high utility; else
if (senses contain (C,A)) selecting C has high utility;
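Or, as a minimal runnable sketch of the same thing (Python; the function names are mine):

def utility(senses, choice):
    # High utility for exactly the option the original program picks.
    if senses == {"A", "B"}: return 1.0 if choice == "A" else 0.0
    if senses == {"B", "C"}: return 1.0 if choice == "B" else 0.0
    if senses == {"C", "A"}: return 1.0 if choice == "C" else 0.0
    return 0.0

def act(senses):
    # A utility maximiser: pick the available choice of maximal utility.
    return max(senses, key=lambda c: utility(senses, c))

assert act({"A", "B"}) == "A"
assert act({"B", "C"}) == "B"
assert act({"C", "A"}) == "C"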
Here’s another example: When given (A,B) a program outputs “indifferent”. When given (equal chance of A or B, A, B) it outputs “equal chance of A or B”. This is also not allowed by EU maximization.
Again, you have just given the utility function by describing it. As for “indifference” being a problem for a maximisation algorithm—it really isn’t in the context of decision theory. An agent either takes some positive action, or it doesn’t. Indifference is usually modelled as laziness—i.e. a preference for taking the path of least action.
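A sketch of that modelling choice (illustrative only; the effort costs are made up):

# Charge every option a small cost for the effort it requires; among
# equally valued outcomes, the maximiser then defaults to the least
# effortful one, which is how indifference can cash out as laziness.

def choose(options, value, effort, effort_weight=0.01):
    return max(options, key=lambda o: value(o) - effort_weight * effort(o))

options = ["pick A", "pick B", "leave it to a coin flip"]
value = lambda o: 1.0                       # indifferent over the outcomes
effort = lambda o: 0.0 if o == "leave it to a coin flip" else 1.0
print(choose(options, value, effort))       # -> "leave it to a coin flip"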