Can new comments be added by default to the bottom of the discussion instead of the top?
CronoDAS
Can we have signatures, like in a forum?
“Shut up and multiply.”
Well, basically, I just want to append
- Doug S.
to the end of each of my comments, so everyone knows that I comment under that name in other places.
Regarding “rationalists should win”—that still leaves us with the problem of distinguishing between someone who won because he was rational and someone who was irrational but won because of sheer dumb luck.
For example, buying lottery tickets is (almost always) a negative EV proposition—but some people do win the lottery. Was it irrational for lottery winners to have bought those specific tickets, which did indeed win?
Given a sufficiently large sample, the most spectacular successes are going to be those who pursued opportunities with the highest possible payoff regardless of the potential downside or even the expected value… for every spectacular success, there are probably several times as many spectacular failures.
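The negative-EV claim about lottery tickets can be made concrete with a quick back-of-the-envelope calculation. This is a minimal sketch with made-up illustrative numbers (ticket price, odds, and jackpot are assumptions, not any real lottery's figures):

```python
# Hypothetical lottery, illustrative numbers only:
# a $2 ticket with a 1-in-300,000,000 chance at a $100,000,000 jackpot.
ticket_price = 2.0
p_win = 1 / 300_000_000
jackpot = 100_000_000

# Expected value of one ticket: probability-weighted payout minus cost.
expected_value = p_win * jackpot - ticket_price

print(f"EV per ticket: ${expected_value:.2f}")  # negative: a losing bet on average
```

Every ticket loses about $1.67 in expectation here, yet some ticket still wins — which is exactly why outcome alone can't distinguish a rational bet from a lucky irrational one.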
This article is a good explanation of information cascades:
http://www.starcitygames.com/magic/fundamentals/12201_Information_Cascades_in_Magic.html
On the other hand, maybe if papers weren’t anonymous, those with prestige could easily be given too much deference. How much Protection From Editors do they actually deserve?
Yes, it’s worth another post—I hadn’t heard that theory before.
::runs off to do some Google searches::
Some difficult work with Google revealed that the technical term is the “sociometer” theory—and it’s fairly recent (the oldest citation I see refers to 1995), which would help explain why I hadn’t heard of it before. It seems consistent with my personal experiences, so I consider it credible.
For more information:
Expected value?
I second this request.
Is it silly to be scared by a horror movie, since it’s just a made-up story? It seems like the same kind of effect—there must be some level of cognition that isn’t affected by the “no, it’s just a made-up story” filter and simply reacts to what’s going through your head and senses.
(For example, consider this classic “test of faith in science”: hold a pendulum’s bob up to your nose, release it, then let it swing back at you. Because energy is conserved, the bob won’t break your nose on the return trip—but good luck overriding the reflex to move your face away.)
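The energy argument in the pendulum demonstration can be checked numerically. Below is a toy simulation (all parameters are illustrative assumptions: a 1 m pendulum released from rest at 30 degrees, with a small damping term standing in for air resistance) showing that the swing never returns above its release angle, so the bob physically cannot reach your nose:

```python
import math

# Toy damped-pendulum simulation; parameters are illustrative assumptions.
g, L, damping, dt = 9.81, 1.0, 0.02, 0.001
theta, omega = math.radians(30), 0.0   # released from rest at 30 degrees
theta0 = theta

max_return = 0.0
for _ in range(int(20 / dt)):          # simulate 20 seconds
    # Semi-implicit Euler: update angular velocity, then angle.
    omega += (-(g / L) * math.sin(theta) - damping * omega) * dt
    theta += omega * dt
    max_return = max(max_return, abs(theta))

# Conservation (minus friction losses) caps the amplitude at the release angle.
assert max_return <= theta0 + 1e-6
print(f"release: {math.degrees(theta0):.1f} deg, "
      f"max swing back: {math.degrees(max_return):.1f} deg")
```

Your reflexes, of course, don't run this calculation — which is the point of the demonstration.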
Eliezer, have you ever seen a hypnosis show, or been hypnotized yourself? Hypnosis is very much related to self-deception and “doublethink”; it seems to me as though it is, at least in part, a way of using one’s higher level functions to tell one’s perceptual systems to shut up and do what they’re told, to shut off various levels of fact-checking.
Why would a rationalist want to overrule one's senses and critical faculties? Well, for one, turning off pain is frequently desirable, and it could be something fun to do in the bedroom… in general, it might be useful in any situation in which your brain isn't doing what you want it to do.
This reminded me of something.
In the book Happiness: Lessons from a New Science by Richard Layard, the author goes into detail about how mood is strongly correlated with differential activation in the two hemispheres of the brain. The left forebrain is more strongly activated than the right forebrain when a person is happy, and the right forebrain is more strongly activated when a person is sad. (Ramachandran mentions that stroke victims with left brain damage frequently become depressed, while ones with right brain damage don’t.)
If the left brain interprets data through the perspective of current theories and the right brain forces theory revision, and left brain activation is associated with happiness and right brain activation is associated with unhappiness, what does that say about happiness and rationality?
I am opposed to the Santa Claus myth, mostly because I hate lying.
Mostly because it’s false, and I have a very powerful aversion to knowingly telling a falsehood (and to the general practice of doing the same).
I also hate to be lied to. I don’t like “white lies” and I refuse to tell them. If you ask me “Does this dress make me look fat?” I really will give you an honest answer—and I hope that other people will do me the same favor. If I didn’t want an honest answer, I wouldn’t have asked in the first place.
Yes. (It probably comes from playing Ultima IV during my formative years.)
I do admit to being a “truth twister,” though—I won’t make outright false statements, but I am willing to omit relevant information, imply false conclusions, or simply refuse to answer awkward questions. (And yes, I agree that there is a certain degree of hypocrisy involved in this practice, but it serves as a reasonable workaround for my inability to lie the way other people seemingly have no trouble doing.)
So I asked him, “In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?”
Obviously, you wait for one of the sick patients to die, and use that person’s organs to save the others, letting the healthy traveler go on his way. ;)
But that isn’t the least convenient possible world—the least convenient one is actually the one in which the traveler is compatible with all the sick people, but the sick people are not compatible with each other.
Well… there is the stock market, but that’s generally too much of a challenge; any edge you get disappears very quickly, so the best thing to do is “free ride” off of other people’s attempts to value stocks and just buy index funds (or the equivalent).
Other domains in which rationality can be tested are “intellectual sports” such as poker, chess, or Magic: The Gathering… it’s hard to test “rationality” in a way that doesn’t simply test intelligence or learned skills, though.
I read my father’s issues of Skeptical Inquirer magazine as a kid. So, well, I basically grew up in this kind of culture.
(I comment as “Doug S.” on Overcoming Bias.)