You forgot to mention—two weeks later he and all other humans were in fact deliriously happy. We can see that, at this moment, he did not want to later be that happy if it came at this cost. But what will he think a year or a decade later?
Robin_Hanson2
Alas, for most audiences I think you would find no one laughing even after an entire applause light speech.
Most theorists think they have the right theory but are wrong. So just because Einstein was right, that doesn’t mean he had good reason to believe he was right. He could have been a lucky draw from the same process.
I’m afraid all bets like this will just recover interest rates, which give the exchange rate between resources on one date and resources on another. The interest rate combines both preferences for having stuff at some dates rather than others, and beliefs about whether people will actually make good on their promises to pay in the future. The problem is that it is very hard to disentangle these two effects.
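A toy illustration of this point (all numbers and the function name are invented for the sketch, not drawn from the comment): the same observed interest rate can arise from quite different mixes of pure time preference and believed repayment probability, which is why the two effects are hard to pull apart from bet prices alone.

```python
def implied_rate(pure_time_preference: float, p_repaid: float) -> float:
    """Rate a lender must charge so the expected repayment, discounted by
    pure time preference, equals the value of resources lent today.

    Solves p_repaid * (1 + r) / (1 + d) = 1 for r."""
    return (1 + pure_time_preference) / p_repaid - 1

# Two very different worlds produce nearly the same observed rate:
# patient lenders facing real default risk ...
r1 = implied_rate(pure_time_preference=0.02, p_repaid=0.97)
# ... versus impatient lenders who expect almost sure repayment.
r2 = implied_rate(pure_time_preference=0.05, p_repaid=0.9985)

print(round(r1, 4), round(r2, 4))  # both about 0.0516
```

Observing only the market rate, one cannot tell which world one is in—exactly the confound the comment describes.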
To continue with this metaphor, it seems what we need is a good set of problems to test our rationality (i.e., ability to resist common biases). As with any cognitive test, the details of the test can’t be known in advance, or people would just memorize answers instead of developing skills. So we need to collect a rather large set of good test questions, or create a generator that produces sets of them automatically. Not easy, but perhaps worth doing.
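One hedged sketch of what such a generator might look like (the templates, names, and dictionary keys here are all invented for illustration): assemble conjunction-fallacy items from randomized parts, so that specific questions can’t be memorized in advance even though the underlying bias being tested stays the same.

```python
import random

def make_question(rng: random.Random) -> dict:
    """Generate one conjunction-fallacy test item from random templates."""
    base = rng.choice(["a bank teller", "a schoolteacher", "an engineer"])
    extra = rng.choice(["is active in local politics",
                        "plays jazz piano",
                        "volunteers on weekends"])
    single = f"The person is {base}."
    conjunction = f"The person is {base} and {extra}."
    options = [single, conjunction]
    rng.shuffle(options)
    return {
        "prompt": "Which statement is more probable?",
        "options": options,
        # A conjunction can never be more probable than either conjunct,
        # so the single statement is always the correct answer.
        "answer": options.index(single),
    }

q = make_question(random.Random(0))
print(q["prompt"], q["options"])
```

A real battery would need many bias families (anchoring, base-rate neglect, overconfidence calibration), but the template-plus-randomization pattern generalizes.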
Very well written, as usual. But many other modern institutions have analogous ancient institutions that look rather silly by modern standards. Consider trial by combat in law, or ancient scholastic obsessions with the “true” meaning of ancient texts. If lawyers and academics can disavow these ancient practices, while still embracing a true essence of law or academia, why can’t religious folks disavow ancient religious practice in favor of some true essence that makes sense in modern terms?
Less popular choices must give advantages to compensate for their unpopularity, but that doesn’t mean they are “better.” Many a small religious sect is bound together all the stronger for being a persecuted minority, and that bond may well be the advantage they seek.
Sadly, I almost always surprise economics graduate students looking for topics to research when I ask them: “What question, where you do not know the answer, would you most like to answer?”
This story is related to the phenomenon whereby the most intelligent and educated religious folks are very careful to define their beliefs so that there can be no conflict with observations, while ordinary people are more prone to allow their religion to have implications, which are then subject to challenges like Eliezer’s. It is fun to pick holes in the less educated views, but to challenge religion overall it seems more honest to challenge the most educated views. But I usually have trouble figuring out just what it is that the most educated religious folks think, exactly.
Imagine a person who declares they are not blue or green, but are just trying as best they can to judge the truth, and their estimate will change with time and context. Imagine that their behavior appears to fit this description. How will other people treat this person? Will they seek him out as a source of good policy information? Will they hire him or fund his work? Or will they suspect him of really being on the other side, and pretending neutrality as a rhetorical trick?
Can anyone arrange to get money to this man or his family? I’m tempted to donate, to honor his deed.
Don’t forget that this bias is reinforced by incentives, as managers tend to prefer overconfident planning forecasters to accurate ones.
This reminds me of so many stories whose explicit moral is “never give up.” The hero keeps trying after everyone told him to quit, and in the end he succeeds, and the audience comes out of the theater reaffirming the value of hope. But, in real life, what a terrible thing to teach people.
The cynic’s conundrum is that while a cynic might prefer that others believe an idealistic theory of his cynical mood, his own cynical beliefs should lead him to believe a cynical theory of his cynical mood. That is, cynics should think that rude complainers tend to be losers, rather than altruists.
This is a good post, and it suggests a whole series of similar posts: take each of the cues people treat as signs of rationality, and dig deeper to look for when exactly rational people would or would not display those signs. Watch out for people proud to display the cue even when rational people would not have it.
It would be very useful if we could have some idea of how often people make big mistakes. It would then be suspicious if we had gone much longer than that typical interval while thinking we had made no such mistake.
“Look who thinks he’s nothing”—funny. :) Perhaps a more general version of your point is: beware of taking pride in subgoal measures of accomplishment, if subgoals without the goal are worth little.
Are you sure this isn’t the Eliezer concept of boring, instead of the human concept? There seem to be quite a few humans who are happy to keep winning using the same approach day after day, year after year. They keep getting paid well, getting social status, money, sex, etc. To the extent they want novelty, it is because such novelty is a sign of social status—a new car every year, a new girl every month, a promotion every two years, etc. It is not because they expect or want to learn something from it.
Eliezer, imagine you knew two people who both did embarrassing stupid things when they were young, and that one person you excused with “boys will be boys” or “the folly of youth,” while of the other you told anyone who would listen that you would never trust or associate with a person who did such a terrible thing. This would seem to be playing favorites, unless perhaps the difference is that one person repented of their youthful acts while the other did not.
Similarly, you seem to be playing favorites in allowing lawyers and academics to disavow their silly ancient practices, while insisting that religious folks today take responsibility for ancient foolish religious claims. Sure, your criticism sticks to those who refuse to disavow those ancient claims, but I think we should treat differently those, like Unitarians, who do disavow them.
My main problem is that I find it hard to understand what such people are in fact claiming. At least I understood the ancient foolish claims, mostly.
I have had this experience several times in my life; I come across clear enough evidence that settles for me an issue I had seen long disputed. At that point my choice is to either go back and try to persuade disputants, or to continue on to explore the new issues that this settlement raises. As Eliezer implicitly advises, after a short detour to tell a few disputants, I have usually chosen this second route. This is one explanation for the existence of settled but still disputed issues; people who learn the answer leave the conversation.