It’s a comment on one of Eliezer Yudkowsky’s Facebook posts. I got permission to post it here, as I thought it was worth posting.
Salivanth
The Courage Wolf looked long and slow at the Weasley twins. At length he spoke, “I see that you possess half of courage. That is good. Few achieve that.”
“Half?” Fred asked, too awed to be truly offended.
“Yes,” said the Wolf, “You know how to heroically defy, but you do not know how to heroically submit. How to say to another, ‘You are wiser than I; tell me what to do and I will do it. I do not need to understand; I will not cost you the time to explain.’ And there are those in your lives wiser than you, to whom you could say that.”
“But what if they’re wrong?” George said.
“If they are wrong, you die,” the Wolf said plainly, “Horribly. And for nothing. That is why it is an act of courage.”
HPMOR omake by Daniel Speyer.
Welcome to Less Wrong!
This is an old topic. Note the title: Welcome to Less Wrong! (2012). I’m not sure where the new topic is, or even if it exists, but you should be able to search for it.
I recommend starting with the Sequences: http://wiki.lesswrong.com/wiki/Sequences
The sequence you are looking for in regards to “right” and “should” is likely the Metaethics Sequence, but said sequence assumes you’ve read a lot of other stuff first. I suggest starting with Mysterious Answers to Mysterious Questions, and if you enjoy that, move on to How to Actually Change Your Mind.
In that case, I pre-commit that if I win, I’ll spend it on something leisure-related or some treat that I otherwise wouldn’t be able to justify the money to purchase.
I co-operated; I’d already committed myself to co-operating on any Prisoner’s Dilemma involving people I believed to be rational. I’d like to say it was easy, but I did have to think about it. However, I stuck to my guns and obeyed the original logic that got me to pre-commit in the first place.
If I assume other people are about as rational as me, then a substantial majority of people should think similarly to me. That means that if I decide that everyone else will co-operate and thus I can defect, there’s a good chance other people will come to the same conclusion as well. The best way to go about it is to pre-commit to co-operation, and hope that other rational people will do the same.
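The symmetry argument above can be sketched numerically. This is a toy model of my own, not anything from the original comment: it uses the conventional illustrative Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0) and assumes a fraction of opponents reason exactly as you do (so their move mirrors yours) while the rest co-operate regardless.

```python
# Toy model of the symmetry argument: if a fraction p_mirror of opponents
# reason exactly as I do (their move mirrors mine), and the rest co-operate
# regardless, which move has the higher expected payoff?
T, R, P, S = 5, 3, 1, 0  # conventional payoffs: temptation, reward, punishment, sucker

def expected_payoff(my_move: str, p_mirror: float) -> float:
    mirror = R if my_move == "C" else P   # mirror reasoners copy my move
    other = R if my_move == "C" else T    # the rest co-operate no matter what
    return p_mirror * mirror + (1 - p_mirror) * other

for p in (0.25, 0.5, 0.75):
    print(p, expected_payoff("C", p), expected_payoff("D", p))
```

Under these assumed payoffs, co-operating overtakes defecting once more than half the field mirrors your reasoning, which matches the "substantial majority" condition in the comment; the exact threshold depends on the payoff values chosen.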
Thanks for the chance to test my beliefs with actual stakes on the line :)
I wanted to thank you for this. I read this post a few weeks ago, and while it was probably a matter of like two minutes for you to type it up, it was extremely valuable to me.
Specifically a paraphrase of point B, “The point where you feel like you should give up is way before the point at which you should ACTUALLY give up” has become my new mantra in learning maths, and since I do math tutoring when the work’s there, I’m passing this message on to my students as well.
So, thank you very much for this advice.
The main technique I used was bypassing the “trying to try” fallacy, as well as some HPMOR-style thinking: obstacles mean you get creative, rather than give up. The most important thing was just not giving up upon finding the first reasonable-sounding solution, even if its chances of success weren’t particularly high.
As to how I applied it, that was the best part, and what the second paragraph alluded to; it was my default response, to the point where I was briefly stunned when my friend was throwing up easily circumventible roadblocks to my ideas as if they were impossible obstacles. (And I did talk to him, in case he had other motives for wanting to not do the plan and was thus actively trying to come up with reasons not to do it.)
It was only then that I reviewed my own thinking and realised how far I’d come since I first found HPMOR and LessWrong. I’d ceased to think of this particular method as unusual; I thought it was how any intelligent person attempted to solve their problems, but my friend matches me intellectually.
If you meant “how” as in specifics: my friend needed to earn extra money, and his reasonable-sounding solution was to find employment, despite the poor prospects for it in his area, and despite the fact that he’d looked before and hadn’t found anything. To him, the solution stopped there, because it could work, whereas that didn’t meet my goal of solving my friend’s problem on its own, due to its unreliability. So I helped him leverage some of his other talents, in addition to looking for work. (Which is a good plan, just not sufficiently reliable on its own.) None of my ideas were particularly brilliant, but I wouldn’t have found them if I’d stopped at the reasonable-sounding solution and decided that was sufficient effort for victory.
Honestly, it’s still weird to me right now. I was actually embarrassed writing this comment, because writing it out made it seem so trivial and not worth being proud about, and I had to remind myself that if it really was that obvious, my friend would have done it himself. Not to mention that a couple of years ago I’d have done the exact same thing in his position.
I got to use rationality techniques to not only solve a friend’s problem that had been ongoing for months, but also managed to completely change the way he thought about problem-solving in general. Not sure if that second part will actually stick.
On a related note, that was when I found out that I’ve internalised the basics of how to REALLY approach a problem with the intent of solving it, to such a degree that I’d forgotten that my thought process was unusual.
How’d it go?
EDIT: My bad, I thought this was posted on 22 January 2013, not 22 January 2012. I’ll leave this up just in case though.
What I’ve found the spoilt version of Nethack tests, more than anything else, is patience. Nethack spoilt isn’t about scholarship, really. You don’t study. You have a situation, and you look up things that are relevant to that situation. There is a small bit of study at the beginning, generally when you look up stuff like how to begin, what a newbie-friendly class/race is, and how to not die on the second floor.
But really, it’s patience. I once did an experiment where players who were relatively new to Nethack were encouraged to spoil themselves as early and often as possible, and request advice frequently from better players. Really, anything short of having someone else play the game for you was not only allowed, but actively encouraged. Since I usually put a limiter on how willing I am to spoil myself on roguelikes, I thought this might be fun. (Namely, I’m unwilling to ask for any advice in tactical situations, only strategic ones: “Which area should I go to next?”, instead of “How do I kill this ogre?”)
Conventional wisdom for Nethack states that upon reaching the halfway point of the game, you should win from there if you play correctly. I got about three-quarters of the way there, on my third run, having never gotten past the second floor on my runs prior to those three. I died to a misclick, not to lack of knowledge or poor tactics. So, patience is the true virtue of Nethack: It’s surprisingly easy to win as long as you spoil yourself, get advice, and don’t screw up.
Sadly, the experiment only had the one participant actually try it, namely me, so the evidence shall remain anecdotal.
Oh, no, I have no problems with people spoiling themselves for Nethack. That’s pretty much the only way to actually win. But if your aim is to improve rationality, rather than to do as well as possible within the game, it might be better to play it unspoiled. After all, Morendil mentioned “hypothesis testing” as something that was taught by Nethack: The spoilt version doesn’t really test that.
I’m assuming this only applies if you aren’t using spoilers for NetHack?
I’m not sure about its rationality-testing or improving abilities, but I find it very fun :)
But this is a rather interesting example of rationality at work. It’s useful for a couple of reasons.
1) There’s a clear indication here of incorrect beliefs leading to unwanted consequences. In this case, downplaying the importance of cup holders leads to the loss of profit that could otherwise be gained.
2) It’s fairly trivial and simple, which actually works in its favor. It’s not technical, meaning we can all understand what’s going on, and it’s extremely unlikely anyone already has an entrenched belief about cup holders that makes rational discourse more difficult.
The simplicity of the example is a point in its favor. We’re not attempting to fix the cupholder problem here; we’re looking at explanations of why it might exist in order to improve our model of things.
Thank you. I apologise for not asking you for verification sooner. My downvote is revoked and I’ve upvoted your post.
I learnt that I should have asked for verification sooner, either immediately, or as soon as you informed me you had reasons for wishing to keep said verification private. I also learnt that I should assign a higher initial probability to claims made by LessWrong members I don’t know, which is a lesson I’m very glad to have learnt, since I do enjoy trusting people.
You’re right.
In this case, assume immortals have perfect memories and would eventually work out that you don’t, and that you are an immortal who can’t remember whether you’ve played a particular opponent before (but can vaguely recall how often the field as a whole defects against you versus co-operates with you). What do you think your optimal strategy would be?
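One way to make the question concrete is a toy simulation, entirely my own construction: agents can’t recognise past opponents, but each sees the running field-wide co-operation rate and may condition its move on it. The payoffs, population mix, and the “co-operate iff the field mostly does” strategy are all illustrative assumptions, not anything proposed in the thread.

```python
import random

T, R, P, S = 5, 3, 1, 0  # illustrative Prisoner's Dilemma payoffs

def simulate(strategies, rounds=2000, seed=0):
    """Each round, pair two random agents. Each agent sees only the
    running field-wide co-operation rate, never the opponent's identity."""
    rng = random.Random(seed)
    scores = [0] * len(strategies)
    coop_seen, total_seen = 1, 2  # weak prior that the field co-operates
    payoff = {("C", "C"): (R, R), ("C", "D"): (S, T),
              ("D", "C"): (T, S), ("D", "D"): (P, P)}
    for _ in range(rounds):
        i, j = rng.sample(range(len(strategies)), 2)
        rate = coop_seen / total_seen
        mi, mj = strategies[i](rate), strategies[j](rate)
        pi, pj = payoff[(mi, mj)]
        scores[i] += pi
        scores[j] += pj
        coop_seen += (mi == "C") + (mj == "C")
        total_seen += 2
    return scores

always_c = lambda rate: "C"
always_d = lambda rate: "D"
tit_for_field = lambda rate: "C" if rate >= 0.5 else "D"  # mirror the field's aggregate

pop = [always_c] * 4 + [always_d] * 4 + [tit_for_field] * 4
print(simulate(pop))
```

This is only a sketch of the setup, not an answer; varying the population mix and the threshold is one way to explore what the optimal strategy might look like under these assumptions.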
Okay, I’ve sent a PM asking you for verification.
I never actually claimed you were making this up, merely that the likelihood of your story being true was low. You inventing the story is only one possible reason why your story might be false. You could also simply be mistaken, have witnessed actions that looked much worse out of context (For example, maybe your friends did something to deserve their treatment, but didn’t tell you because it would make them look bad) or some other reason I haven’t thought of.
In addition, you ask why I care so much about lack of transparency when I can think of reasons why you’d want to keep information private. You gave none of this information in the original post, so if I were to come up with potential reasons why you might want to keep the information secret, I’d be rationalising.
With that in mind, evidence that your story is false:
The prior probability of your claim is low. Not extremely low, but as when making any claim that isn’t obvious, the “burden of proof” is upon you. (Naturally, I don’t expect PROOF, hence the inverted commas, but you do need to provide sufficient evidence to overcome the initial low probability.)
You claim to have references, yet don’t provide them in the initial post or explain in the initial post why you won’t publicly provide them. (Yes, you’ve given me an explanation now, which reduces the strength of this evidence, but does not eliminate it.)
I have been unable to find any corroborating evidence for your story.
The reaction on LessWrong, a site where the average member tends to be at least somewhat rational and probably at least as rational as myself, if not more so, is nearly universally negative.
You’ve failed to provide verification. You claimed your story was easily verified, yet there’s a conspicuous absence of any verification. Unlike your references, if your story is “easily verified”, that means it’s verifiable using public knowledge, and you haven’t provided that knowledge. (If the story is verifiable by asking you, that does not count. You’re asking us to verify the trustworthiness of a source by asking that same source.)
Evidence that your story is true:
You said it is. (Let’s start with the obvious here.)
Lack of discernible motivation for lying.
Consequences if you’re wrong, which you seem to care about. (Loss of karma/status in the group.)
You’ve been around for a while.
Decent chance people on LW would call you out on it if you were lying. (Thus making you less likely to try and fool people.)
In the end, the evidence for it being false is simply stronger. You’ve failed to overcome the burden of probability you’ve shouldered by making the claim. In order to overcome this burden, more evidence is required; hence my asking you to show the easy verification you claim exists, and to post your references. If you have a good reason not to do the latter, at least do the former, and if you have a good reason not to do THAT as well, you’ll just have to resign yourself to not being believed here.
That’s how it looks from your perspective. From a reader’s perspective, it looks like someone who isn’t a notable community figure on LessWrong (At least, I assume this, based on your karma scores and the fact that I have never heard of you. If I’m wrong, I apologise.) has suddenly made a claim with a significant burden of proof on it, and not provided any concrete evidence, despite apparently sitting on some. “I have evidence but am not going to include this in this post, nor will I explain why I cannot include the evidence in this post.” is an immediate red flag.
Additionally, I refer to Michaelos’s point; he puts it better than I can. You’re accusing someone of being insane, but your post comes off as not being all that serious, with the “lol”s interspersed.
Lastly, you claim that your story is easily verified, but some Google searches have turned up absolutely nothing even tangentially related to your claim except for this thread. If it’s easily verified by external sources, I haven’t been able to see it.
So, if it never occurred to you that your story would be doubted, you’ve obviously made a mistake somewhere. Your evidence in favor of you telling the truth (You’ve been on LW for a while, you’re opening yourself up to falsification, you have no known reasons to start attacking this person) is simply nowhere near sufficient.
That said, you can still fix this. Clearly, you were wrong about the likelihood of people doubting you, but everyone makes mistakes. So post your evidence, link us to somewhere that verifies your story, and I expect the problem will be solved.
If you have references, and you want to get potentially helpful information to rationalists, why on earth would you not just post these references to begin with? If you have a good reason for not making the references public, why didn’t you say so in your initial post?
I believe this lesson is designed for crisis situations where the wiser person taking the time to explain could be detrimental. For example, a soldier believes his commander is smarter than him and possesses more information than he does. The commander orders him to do something in an emergency situation that appears stupid from his perspective, but he does it anyway, because he chooses to trust his commander’s judgement over his own.
Under normal circumstances, there is of course no reason why a subordinate shouldn’t be encouraged to ask why they’re doing something.