In the regular-price group, 85.4% (95% confidence interval [CI], 74.6%-96.2%) of the participants experienced a mean pain reduction after taking the pill, vs 61.0% (95% CI, 46.1%-75.9%) in the low-price (discounted) group (P = .02). Similar results occurred when analyzing only the 50% most painful shocks for each participant (80.5% [95% CI, 68.3%-92.6%] vs 56.1% [95% CI, 40.9%-71.3%], respectively; P = .03).
The probability of recovering is exactly half of what you estimate it to be due to the placebo effect/positive thinking.
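Taken literally, this claim is self-referential: if the true probability is always half of whatever you estimate, the only estimate consistent with itself is the fixed point of p = p/2, which is zero. A minimal sketch of that collapse (the starting value 0.8 is arbitrary):

```python
# If the "true" probability is half of the current estimate, then an
# estimator that keeps updating toward the truth is driven to the only
# fixed point of p = p / 2, namely p = 0.
p = 0.8  # arbitrary starting estimate
for _ in range(100):
    p = p / 2  # replace the estimate with the "true" value
print(p)  # numerically indistinguishable from zero
```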
Actually, you can solve this problem just by snapping your fingers, and this will give you all the same benefits as the placebo effect! Try it—it’s guaranteed to work!
I’ve been doing this for years, and it really does work!
(No, really, I actually have; it actually does. The placebo effect is awesome ^_^)
Relevant and amusing (to me at least) story: A few months ago when I had a cold, I grabbed a box of zinc cough drops from my closet and started taking them to help with the throat pain. They worked as well or better than any other brand of cough drops I’ve tried, and tasted better too. Later I read the box, and it turned out they were homeopathic. I kept on taking them, and kept on enjoying the pain relief.
Probably not. Try throwing a coin in a wishing well or lighting a dollar bill on fire for more effect.
http://jama.ama-assn.org/content/299/9/1016.full
… Even YOU miss the point? I guess I utterly failed at explaining it, then.
IF I could solve the problem I’m stating in the first post, then this would indeed be almost true. It might be true in 99% of cases, but 0.99^infinity is still ~0, so that is the only probability I can consistently assign to it. I MIGHT be able to self-modify to be able to hold inconsistent beliefs, but that’s doublethink, which you have explicitly, loudly, and repeatedly warned against and condemned.
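The arithmetic behind "0.99^infinity is still ~0" checks out: a per-instance reliability of 99% compounds geometrically toward zero, and is already negligible after a modest number of instances. A quick check:

```python
# 0.99 ** n shrinks geometrically; its limit as n grows is exactly 0,
# and it is tiny long before "infinity".
print(0.99 ** 100)   # ~0.366
print(0.99 ** 1000)  # ~4.3e-5
```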
I’m baffled at how I seem unable to point at/communicate the concept. I even tried pointing at a specific instance of you using something very similar in MoR.
Eliezer is not “the commenter on LessWrong most capable of understanding (or repairing to an understandable position)”. He is “the most capable of presenting ideas in a readable format” AND “the person with the most rational concepts” on LessWrong. Please stop assuming these qualities are good proxies for, well, EVERYTHING.
Agree. I wouldn’t go as far as to say he was worse than average at understanding others, but it certainly isn’t what he is renowned for!
I thought it was all just g factor + understanding of language.
Not quite. Having the right priors about other people’s likely beliefs, patience, and humility are all rather important.
There are some people whom I consider incredibly intelligent, and who clearly understand the language, whom I basically expect to be replying to a straw man whenever they reply, all else being equal. (Not Eliezer.)
Eliezer has always come off as having plenty of those as well.
What does this mean?
Each one of his sequence posts represents a concept in rationality—so he has many more of these concepts than anyone else here on LW.
(I just noticed there’s some ambiguity: it’s the largest number of rational concepts, not concepts of the highest standard of rationality. [most] [rational concepts], not [most rational] [concepts].)
They aren’t?!?
It would take an artificially bad situation for this to be the case. In the real world, the placebo effect still works, even if you know it’s a placebo—although with diminished efficacy.
But that’s beside the point. More on-point is that intentional self-delusion, if possible, is at best a crapshoot. It’s not systematic; it relies on luck, and it’s prone to Martingale-type failures.
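For the Martingale analogy, a numeric sketch (my own illustration, not from the thread): a doubling-down strategy only survives until its first long losing streak, and over enough trials a ruinous streak is all but guaranteed.

```python
def cost_of_streak(k):
    # Doubling the stake after each loss means a streak of k losses
    # costs 1 + 2 + 4 + ... + 2**(k-1) = 2**k - 1 units to sit through.
    return 2 ** k - 1

print(cost_of_streak(10))  # 1023: ten straight losses wipe out a 1000-unit bankroll

# With fair coin flips, a 10-loss streak starts about twice per 2000
# trials in expectation, so ruin is the likely long-run outcome.
print(2000 * 0.5 ** 10)  # ~1.95
```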
The HPMOR and placebo examples appear, to me, to share another confounding factor: The active ingredient isn’t exactly belief. It’s confidence, or affect, or some other mental condition closely associated with belief. If it weren’t, there’d be no way Harry could monitor his level of belief that the dementors would do what he wanted them to, while simultaneously trying to increase it. Anecdotally, my own attempts at inducing placebo effects feel similar.
If I understood things correctly, the placebo effect works if your brain thinks that you think it will work.
And yes: that I can’t reliably self-delude, and that even if I could it would be prone to backfire, is exactly what makes this a problem.
I’m decently sure that my brain does not store beliefs separately from confidence, affect, etc.
I thought that was exactly the point of the dementor sequence: that it was an impossible paradox.
The supposed equivalent version in HP:MOR… (I do not wish to speak for anyone else—feel free to chime in yourselves)
That scene was a clear example (to me) of TDT succeeding outside the prisoner’s dilemma setting. In a case where apparently only ignorance would help, TDT can transcend that and provide (almost) the same power.
Huh? Maybe we’re thinking of different scenes.