The primary effect that reading this had on me was the change in state from [owning a cloak hadn’t occurred to me] to [owning a cloak sounds awesome; I am unhappy that I hadn’t thought of it on my own]
neq1
Error detection bias in research
Bayes’ rule =/= Bayesian inference
Beauty quips, “I’d shut up and multiply!”
In the first example, you couldn’t play unless you had at least 100M dollars of assets. Why would someone with that much money risk 100M to win a measly 100K, when the expected payoff is so bad?
How about spreading rationality?
This site, I suspect, mostly attracts high IQ analytical types who would have significantly higher levels of rationality than most people, even if they had never stumbled upon LessWrong.
It would be great if the community could come up with a plan (and implement it) to reach a wider audience. When I’ve sent LW/OB links to people who don’t seem to think much about these topics, they often react with one of several criticisms: the post was too hard to read (written at too high a level); the author was too arrogant (which I think women particularly dislike); or the topic was too obscure.
Some have tried to reach a wider audience. Richard Dawkins seems to want to spread the good word. Yet, I think sometimes he’s too condescending. Bill Maher took on religion in his movie Religulous, but again, I think he turned a lot of people off with his approach.
A lot has been written here about why people think what they think and what prevents people from changing their minds. Why not use that knowledge to come up with a plan to reach a wider audience? I think the marginal payoff could be large.
Here is what he said prior to making the statement I quoted (to give you some context):
Take historical analogies. I believe that historical analogies are always wrong. This is a long discussion, but, to me, the most dangerous thing about Chamberlain’s capitulation to Hitler at Munich is not the fact that Munich happened and it led to further Nazi aggression and so on and so forth, but that the example of Munich has been used to support thousands upon thousands of bad policies and inappropriate decisions. LeMay called JFK’s recommendation for a “quarantine” (that is, a blockade) in the Cuban Missile Crisis “worse than Munich”. Would nuclear war have been a better alternative? But nuclear war was averted by Kennedy’s policies. And thirty years later the Soviet Union collapsed without the need for nuclear war. Was LeMay right? I don’t think so. But again, the example of Munich was invoked to justify the invasion of Iraq. Appeasing Saddam, appeasing Hitler. The use of the Munich analogy does not clarify, it obscures.
The thing that I have been most surprised by is how much NTs like symbols and gestures.
Here are some examples:
Suppose you think your significant other should have a cake on his/her birthday. You are not good at baking. Aspie logic: “It’s better to buy a cake from a bakery than to make it myself, since the better the cake tastes the happier they’ll be.” Of course, the correct answer is that the effort you put into it is what matters (to an NT).
Suppose you are walking through a doorway and you are aware that there is someone about 20 feet behind you. Aspie logic: “If I hold the door for them they will feel obligated to speed up a little, so that I’m not waiting too long. That will just inconvenience them. Plus, it’s not hard to open a door. Thus, it’s better for them if I let the door close.” To the NT, you are just inconsiderate.
Suppose you are sending out invitations to a graduation party. You know that one of your close friends is going to be out of town that weekend. Aspie logic: “There is no reason to send them an invitation, since I already know they can’t go. In fact, sending them an invitation might make them feel bad.” If your friend is an NT, it’s the wrong answer. They want to know they are wanted. Plus, it’s always possible their travel plans will get canceled.
In each of these 3 examples the person with AS is actually being considerate, but would not appear that way to an NT.
Intuition Pump
Suppose 50% of people in a population have an asymptomatic form of cancer. None of them know if they have it. One of them is randomly selected and a diagnostic test is carried out (the result is not disclosed to them). If they don’t have cancer, they are woken up once. If they do have it, they are woken up 9 times (with amnesia-inducing drug administered each time, blah blah blah). Each time they are woken up, they are asked their credence (subjective probability) for cancer.
Imagine we do this repeatedly, randomly selecting people from a population that has 50% cancer prevalence.
World A: Everyone uses thirder logic
Someone without cancer will say: “I’m 90% sure I have cancer.”
Someone with cancer will say: “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.” “I’m 90% sure I have cancer.”
Notice, everyone says they are 90% sure they have cancer, even though only 50% of them actually do.
Sure, the people who have cancer say it more often, but does that matter? At an awakening (you can pick one), people with cancer and people without are saying the same thing.
World B: Everyone uses halfer logic
Someone without cancer will say: “I’m 50% sure I have cancer.”
Someone with cancer will say: “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.” “I’m 50% sure I have cancer.”
Here, half of the people have cancer, and all of them say they are 50% sure they have cancer.
My question: which world contains the more rational people?
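Where each camp’s number comes from can be checked with a quick simulation. This is my own sketch, not part of the original exchange; the population size is arbitrary. It tallies cancer prevalence two ways: per person (the halfer’s 50%) and per awakening (the thirder’s 90%).

```python
import random

random.seed(0)
N = 100_000

# True = has the asymptomatic cancer; prevalence is 50%
people = [random.random() < 0.5 for _ in range(N)]

# Each person is awakened 9 times if they have cancer, once if not
awakenings = []
for has_cancer in people:
    awakenings.extend([has_cancer] * (9 if has_cancer else 1))

frac_people = sum(people) / len(people)
frac_awakenings = sum(awakenings) / len(awakenings)

print(f"fraction of PEOPLE with cancer:     {frac_people:.3f}")     # ~0.50, the halfer's number
print(f"fraction of AWAKENINGS with cancer: {frac_awakenings:.3f}") # ~0.90, the thirder's number
```

Both numbers are correct frequencies; they just answer different questions. The thirder is calibrated over awakenings, the halfer over people, which is exactly the disagreement the intuition pump is probing.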
“The Bridge”. There was one person who survived and said he changed his mind once he was airborne. My recollection of the movie is that most of the people who jumped had been wanting to die for most of their lives. Even their family members seemed at peace with it for that reason.
The first one is flawed, IMO, but not for the reason you gave (and I wouldn’t call it a ‘trick’). The study design is flawed. They should not ask everyone “which is more probable?” People might just assume that the first choice, “Linda is a bank teller” really means “Linda is a bank teller and not active in the feminist movement” (otherwise the second answer would be a subset of the first, which would be highly unusual for a multiple choice survey).
The Soviet Union study has a better design, where people are randomized and only see one option and are asked how probable it is.
“History is like the weather. Themes do repeat themselves, but never in the same way. And analogies became rhetorical flourishes and sad ex post facto justifications rather than explanations. In the end, they explain nothing.”
-Errol Morris
We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.
I think you said it better earlier when you talked about whether the reduction in incidence outweighs the pain caused by the tactic. For some conditions, if it wasn’t for the stigma there would be little-to-nothing unpleasant about it (and we wouldn’t need to talk about reducing incidence).
I agree with your general principle, but think it’s unlikely that blame and stigma are ever the most useful methods. We should be careful to avoid the false dichotomy between the “stop eating like a pig” tactic and fat acceptance.
Sandy’s husband is an asshole, who probably defends his asshole behavior by rationalizing that he’s trying to help her. He’s not really trying to help her (or if he is, he knows little about psychology (or women)).
Blame and judgment are such strong signaling devices that I think people rarely use them for the benefit of the one being judged. If they happen to be the best tactic for dealing with the problem, well, that would be quite a coincidence.
--
I liked your post a lot, in case that wasn’t clear. I think you are focusing on the right kinds of questions.
wait, this isn’t well done satire?
You have to realize that a great number of things are discussed in these proceedings that the mind just can’t deal with, people are simply too tired and distracted, and by way of compensation they resort to superstition.
-- Kafka, The Trial
The body count argument annoys me, and it’s disappointing to see people like Hitchens use it. Whether or not there are evidence-based reasons to believe in god is a separate issue from whether people who do or do not believe in god do other stupid or immoral things. There are atheists, I’m sure, who reject god for completely irrational reasons and are generally irrational themselves. It matters not just what you believe, but why you believe it.
It would be nice if the top scoring all-time posts really reflected their impact. Right now there is some bias towards newer posts. Plus, Eliezer’s sequences appeared at OB first, which greatly reduced LW upvotes.
Possible solution: every time a post is linked to from a new post, it gets an automatic upvote (perhaps we don’t count it if linked to by the same author). I don’t know if it’s technically feasible.
But: “You can be a virtue ethicist whose virtue is to do the consequentialist thing to do”
So why do you still take vitamins? If you look at their Figure 2, there aren’t many studies that ‘favored antioxidants’, and some of those studies had low doses.
“A linear analysis assumes that if 10 milligrams is good for you, then 100 milligrams is ten times as good for you, and 1000 milligrams is one-hundred times as good for you.” That’s only true if the range of data included both 10 milligrams and 1000 milligrams. Linearity is only assumed within the range of the observed data.
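As a toy illustration of that point (the doses and effect sizes below are made up, not from the paper), `np.interp` makes the within-range restriction explicit: it fits linearly between observed doses and refuses to extrapolate, clamping to the endpoint value instead of projecting a 10x benefit at 1000 mg.

```python
import numpy as np

# Hypothetical dose-response observations: dose in mg, effect size (illustrative only)
doses = np.array([10.0, 20.0, 50.0, 100.0])
effect = np.array([0.05, 0.09, 0.15, 0.18])

# Within the observed range [10, 100], linear interpolation is defensible
print(float(np.interp(30, doses, effect)))    # between the 20 mg and 50 mg observations

# Outside the range, np.interp clamps to the endpoint rather than extrapolating:
# the data simply say nothing about 1000 mg
print(float(np.interp(1000, doses, effect)))  # returns the 100 mg value, 0.18
```

The linear-assumption criticism only bites if the model is used to predict outside the span of doses actually studied.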
The hockey stick approach seems too restrictive as well. Just use a p-spline.
There doesn’t appear to be a statistician on the paper. This study really needed one. Using meta-regression to estimate a dose effect is challenging, especially when you don’t have access to the original data (just aggregate, study-level covariates). In fact, the dose effect and the concept of study heterogeneity are conflated here.
I agree with you that it’s unclear what they actually did.
A rationalist sits down next to an attractive woman at the bar.
He asks “are you familiar with immediate reward bias?”
“No,” she responds.
“Well, people tend to place irrationally high value on immediate rewards, relative to future rewards. So, for example, they might prefer $50 today over $55 next week. This is a bias that a more rational person can take advantage of in trade negotiations. Unfortunately, I am an impatient person. With that in mind, I have an offer for you. If you agree to have sex with me ONCE tonight, I will agree to have sex with you TWICE next week.”