Personal website: https://andrewtmckenzie.com/
Andy_McKenzie
I like this post. By the way, another area where people always get into arguments over the definition of a word is sports. Is NASCAR a sport? Is figure skating?
Also, in learning theory, how to define a reflex is a matter of considerable debate. Is jealousy a reflex? Can you think of any reason to care? Seriously, I’m wondering.
Soooo jealous, haha. I’ll be at college, so I hope you do another meetup over the summer. You have to try to cater to your Overcoming Bias readers who are wasting their money on higher education!
Good post. I don’t think that #18,
“Create value so that you can capture it, but don’t feel obligated to capture all the value you create. If you capture all your value, your transactions benefit only yourself, and others have no motive to participate. If you have negotiating leverage, use it to drive a good bargain for yourself, but not a hateful one—someday you’ll be on the other side of the table.”
is a real capitalist value. I think that the capitalist value is to strike the best deal that you can, no matter what. In game-theoretic terms, there’s no reason to assume repeated play, or that one day you will be on the opposite side of the table. You should encourage the people you are transacting with to want to come to the table again, true, but I don’t see why the word “hateful” has to enter the conversation.
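The one-shot versus repeated-play distinction can be made concrete with a toy model (all payoffs and probabilities here are made up for illustration, not taken from any real game-theory result):

```python
# Toy model (hypothetical numbers): a harsh bargain captures more value
# now, but the counterparty refuses to trade again; a fair bargain
# captures less per deal but keeps the counterparty coming back.

def total_payoff(per_deal_value, repeat_probability, max_rounds=1000):
    """Expected cumulative payoff when each completed deal is followed
    by another one with probability repeat_probability."""
    total, p = 0.0, 1.0
    for _ in range(max_rounds):
        total += p * per_deal_value
        p *= repeat_probability
    return total

harsh = total_payoff(per_deal_value=10, repeat_probability=0.0)  # one-shot
fair = total_payoff(per_deal_value=6, repeat_probability=0.8)    # repeated

print(harsh)  # 10.0
print(fair)   # approaches 6 / (1 - 0.8) = 30
```

Whether the “fair” column wins depends entirely on the repeat probability, which is the commenter’s point: if you genuinely expect one-shot play, the harsh bargain dominates.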
If somebody was planning to destroy the world, the rationalist could stop him and not break his oath of honesty by simply killing the psychopath. Then if the rationalist were caught and arrested and still didn’t reveal why he had committed murder, perhaps even being condemned to death for the act but never breaking his oath of honesty, now that would make an awesome movie.
Eliezer, one qualm: You consistently bring up mirror neurons and consider it to be obvious prima facie that they are used for action understanding in humans. Unfortunately, most contemporary neuroscientists in the field agree that there is no consistent evidence of this:
http://talkingbrains.blogspot.com/2008/08/eight-problems-for-mirror-neuron-theory.html
That is not to say that humans don’t understand other people’s actions or that we lack an adequate theory of mind! But it does mean that there is no reason to suspect that those complicated cognitive events can be reduced to a group of “mirror” neurons. Ramachandran often mentions them too, which also irks me slightly.
Jack: The idea of having citations everywhere is nice but impractical. It would slow down conversation and dialogue tremendously.
One possible alternative is to have nested dialogues. Each sentence that makes some sort of claim links to another which explains the idea more thoroughly if that is what you disagree with. If you do not disagree with that point, then you can continue reading the main chain. This is similar to the idea of hypertext dialogue: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.3246 , and it is similar to what Eliezer has done at OB by being so self-referential.
Number 2 seems to be the primary roadblock to this comparison. You say that rationalists should have appropriate feedback systems set up, but it would be very difficult to achieve the feedback latency (seconds) necessary for flow when making decisions. Until we have some sort of brain-computer interface, I doubt that feedback could come fast enough to feel “flow” when making rational decisions.
Interesting idea. If this was indeed a scenario that presented itself often in the Pleistocene, then we should expect individuals to signal that they do not do well under pressure (and in order to enable self-deception, actually believe that they will not do well), but consistently perform better under pressure than they expect to. There are cultural desires that shape our avowed expectations under many scenarios, so perhaps the best empirical test of this would be under a novel game situation.
If true, perhaps this could explain the pervasiveness of self-deprecating behavior?
Society tells you to work to make yourself more valuable. Then it tells you that when you reason morally, you must assume that all lives are equally valuable.
Nitpicking: I don’t think this is a good way of framing the issue. “Society” doesn’t tell you to do anything. There are societal structures in place that reward certain actions, but you are not told to do anything one way or another. I only mention this because you are not the first to do so.
As far as your ethics are concerned, you are assuming that a rationalist will be able to deduce the best possible action at the outset of his life, instead of experimenting with various strategies and updating his beliefs. In a probabilistic environment, reward matching is the best strategy.
I agree with this point generally, but it is difficult to find specific examples because they will be heavily context-dependent. “Ratiocinative” is one probably underused word, as is “lucid.”
It’s non-fiction, but the book I wish I had read earlier is Engines of Creation by Eric Drexler. It’s the most optimistic account of the future I have ever read. If you are ever feeling down, seriously down, you should read that book before doing anything drastic.
I’ve heard this story before, that we have to teach things to children at a young age in order for them to fully embrace them, but is there any evidence of this actually happening? Moreover, what’s wrong with people opting into being rational?
Excellent post. Having just read The Adapted Mind (and, earlier, The Moral Animal), I can see where Eliezer got a lot of his stuff on evolutionary psychology from.
However, all authors must walk a tightrope between appeasing the Carl Shulmans of the world, who have read everything, and introducing some background for naive readers beyond simply telling them to catch up on their own. I think he generally does a good job of erring on the side of more complexity, which is what I appreciate, so of course I forgive him. :)
A niche that a good author might consider filling is actually including the numbers from the experiments they reference, i.e., the experimental scores and their standard errors, etc. It might turn off the innumerate, but I think that pure numbers and effect sizes are grossly underreported by science writers.
Nice history. These are successful because they measure dependent variables that reflect subjects’ opinions rather than asking for their opinions directly. There are other ways to do it, including having subjects do word-unscrambling exercises while varying the type of word they have to unscramble, and then measuring their actual behavior.
How about Terror Management Theory? By supporting a cause that is probably going to win anyway, we gain little. But by supporting an unlikely cause such as Leonidas at Thermopylae, there is an increased possibility that if we succeed our accomplishments will live on past us, because it is so incredible. In this way, we would become immortal. One prediction from this explanation is that the greater the disparity between the underdog and the overdog the larger the preference towards the underdog will be, which seems to be backed up empirically (see the increased preference for Slovenia vs. Sweden in the referenced study).
I like this idea too. One prediction from it seems to be that those who feel less like underdogs (such as a Saudi Prince) will support underdogs less. One might find those who feel less like underdogs via general socioeconomic status too, but since we have a fairly egalitarian society, high-income people might actually be more likely to have considered themselves underdogs during their formative years.
If everybody in the tribe has this adaptation, then it will no longer be useful, because everybody will be supporting the underdog. The optimal strategy, then, is not to support the underdog per se but instead to support the cause that fewer people support, factoring in the rough probabilities that Zug and Urk each have of winning. How would this yield a systematic bias toward favoring the underdog? It would only occur if, in the modern world, we still suspect that the majority will favor the team more likely to win.
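The “support the less-crowded side” logic can be sketched as a toy expected-payoff calculation (the win probabilities, supporter shares, and the payoff rule itself are hypothetical, chosen only to illustrate the reasoning):

```python
def signaling_payoff(win_prob, supporter_share):
    """Hypothetical payoff of backing a side: the glory of a win is
    split among that side's supporters, so rarer support is worth more."""
    return win_prob / supporter_share

# Zug is the favorite (70% to win) and most of the tribe backs him.
zug = signaling_payoff(win_prob=0.7, supporter_share=0.9)
urk = signaling_payoff(win_prob=0.3, supporter_share=0.1)

print(zug)  # ~0.78
print(urk)  # ~3.0 -- backing the underdog pays more when support is scarce
```

Under this payoff rule, the underdog bias only appears because the favorite’s supporter share is assumed to be large; if everyone switched to backing underdogs, the shares would flip and so would the payoffs, which is the equilibrium point made above.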
I upvoted because I agree on the meta level that it would be nice to have more diversity of ideas in this and most other communities. And I read all of the comments, so it obviously triggered a discussion that interested me.
I agree with Robin, though, that generalizations need not be prohibited; that is going too far. However, generalizations should, whenever possible, be made falsifiable.
Hey Colin, I enjoyed reading this; head land is definitely a useful paradigm. In fact, it’s so useful that it enables me to share two of my favorite experiences from head land:
1) Two characters in my head land will get into an argument or heated conversation over some point. Usually one of the characters is a future version of myself. What then happens is that one of the characters will make an extremely good point; quasi-irrefutable by the other. The person making the really good point is usually the aforementioned future version of myself. Then the other character will make the rejoinder that this point is irrelevant because this conversation is only happening in my head and is unlikely to ever occur in real life.
2) Two people will be having a conversation in some imagined future, and one of them is me. The conversation is going quite swimmingly, often full of only weakly deflected praise for my character. And then I remark how odd it is that this conversation is actually happening, because it is exactly the same conversation that I had once envisioned having in my head. And then the other person will say, did you envision me saying this, too? And then I will say, yes.
Sometimes these head land occurrences make me laugh, sometimes they make me sad (because boys don’t cry!). One interesting question is whether people are spending comparatively more time in head land now than at other periods of history, and what the implications would be if the answer is yes.
This reminds me of a conversation from Dumb and Dumber.
Lloyd: What are the chances of a guy like you and a girl like me… ending up together?
Mary: Well, that’s pretty difficult to say.
Lloyd: Hit me with it! I’ve come a long way to see you, Mary. The least you can do is level with me. What are my chances?
Mary: Not good.
Lloyd: You mean, not good like one out of a hundred?
Mary: I’d say more like one out of a million.
[pause]
Lloyd: So you’re telling me there’s a chance.
Good post.