I just leave handgun ammunition everywhere.
Spurlock
So You’ve Changed Your Mind
Merry Newtonmas LW. Have some rationalist music.
I applaud the sites that have blacked out and/or put up anti-SOPA messages. SOPA and PIPA are bad news, and the word needs to be spread.
That said, there are two very important differences between those actions and the hypothetical LW blackout:
1) The sites that are blacking out are by and large sites that could be directly and severely hurt by the legislation. This is why I consider it okay for Wikipedia to black out about SOPA, but would be furious if the site were to black out because the editors didn’t like some piece of immigration reform. They’re not simply choosing to use their status as a soapbox, they actually must defeat these bills if they wish to continue existing.
2) The community driven sites that blacked out (including Reddit and Wikipedia) did so only after a serious discussion with their userbase. LW falls into this category, but no such discussion has taken place. Community site = community decision.
So IMO, a LW blackout would be an arbitrary political stance on a not-particularly-related issue, and a total snub to the community since we weren’t consulted about it. I wouldn’t be too pissed, since SOPA really does need to be stopped, but I would definitely consider it tacky.
I suspect (perhaps “fear”) that, outside of very specific goal-oriented fields like entrepreneurship, this is more likely a symptom of self-deception about our goals.
You tell yourself that your ultimate goal is, for example, to make the world a happier place. And so it is for this ultimate reason that you decide to be a video game programmer. What a coincidence that you’re a video game enthusiast who always dreamed of making the next Mario Bros. What a coincidence that it happens to pay extraordinarily well.
And if someone points out that you could probably increase world happiness more by, say, donating some of that money to charity, naturally you can come up with some convoluted explanation of why this is not (at least provably) so.
I think even more so though, it happens on a small scale. When I’m working, I take breaks to cruise the internet. Ostensibly, to recharge and give my brain a break. While this is indeed what I’m doing, this explanation has usually run dry within 10 minutes. After this point, my actual goal has become putting off work because something else seems more interesting, and I’d be lying to myself to claim otherwise.
In short, we sometimes fall short of our “goals” because they’re actually not our goals. Canonically, this.
“Muad’Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It is shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad’Dib knew that every experience carries its lesson.”
Frank Herbert, Dune
If you’re Lucius at this point, how the hell do you now update your “Harry is Voldie” hypothesis?
On the one hand, he just paid 100K galleons to save a mudblood girl. On the other hand, he spooked a dementor. On the other other hand, while that feat may be impressive, it’s certainly not anything the Dark Lord had been known to do previously. And is he conspiring with Dumbledore, or against him?
Probably a very confusing time to be the Lord of Malfoy.
META: Meetup Overload
The most notable problem with Pascal’s Goldpan is that when you calculate the utility of believing a particular hypothesis, you’ll find that there is a term in that equation for “is this hypothesis actually true?”
That is, suppose you are considering whether or not to believe that you can fly by leaping off a cliff and flapping your arms. What is the expected utility of holding this belief?
Well, if the belief is correct, there’s a large utility to be gained: you can fly, and you’re a scientific marvel. But if it’s false, you may experience a tremendous disutility in the form of a gruesome death.
The point is that deciding you’re just going to believe whatever is most useful doesn’t even solve the problem of deciding what to believe. You still need a way of evaluating what is true. There may be situations where one can expect a higher utility from believing something false, but as EY has touched on before, if you know you believe falsely, then you don’t actually believe. Human psychology does seem to contain an element of Pascal’s Goldpan, but that doesn’t make it rational (at least not in the sense of “optimal”; at most it implies that at some point in our evolution such a system tended to win in some sense).
At present the best we can do seems to be keeping our truth-determining and our utility-maximizing systems mostly separate (though there may be room for improvement on this), and Occam’s Razor is one of our tried-and-true principles for the truth-determining part.
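To make the cliff-jumping asymmetry concrete, here is a toy calculation (all utility numbers and probabilities are invented purely for illustration) of the expected utility of acting on the belief, as a function of the probability that the belief is actually true:

```python
# Toy expected-utility calculation for acting on a belief whose payoff
# depends on whether the belief is actually true. All numbers are
# made up for illustration.

def expected_utility(p_true, u_if_true, u_if_false):
    """Expected utility of acting on a belief that is true with probability p_true."""
    return p_true * u_if_true + (1 - p_true) * u_if_false

u_fly = 1000          # you can fly, and you're a scientific marvel
u_death = -1_000_000  # gruesome death at the base of the cliff

for p in (0.5, 0.01, 0.0001):
    print(p, expected_utility(p, u_fly, u_death))
```

Even granting the belief even odds of being true, the expected utility of acting on it is massively negative, and it only gets worse as the probability drops. The point the calculation illustrates: you cannot evaluate how useful a belief is without a term for how likely it is to be true.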
Luke is doing an AMA on Reddit
I’d like to predict that whatever actually happened with Dumbledore and Narcissa, it will turn out to have been foreshadowed by whatever happened in Chapter 17 between Dumbledore and the chicken.
That is, I can’t actually figure out whether he seriously burned a chicken alive, made it look like he burned a chicken alive, or that actually is what a Phoenix looks like right before regenerating. But he appeared to set fire to a chicken, and I predict that he used essentially the same move on Narcissa, as suggested by the law of conservation of detail.
I don’t think it’s possible that he just whisked her away with Phoenix-travel, as this apparently doesn’t actually look anything like someone burning alive, viewed from the outside. But whatever he did with the chicken at least looked enough like burning to fool Harry:
The chicken’s beak opened, but it didn’t have time for so much as a single caw before it began to wither and char. The blaze was brief, intense, and entirely self-contained; there was no smell of burning.
As expected, subjects who received the wordings they had been primed to feel disgust toward judged the couple’s actions as more morally condemnable than other subjects did.
I would just like to point out that this seems like fantastic training material for Rationalist Boot Camp and related projects.
Is your studied, practiced, meticulously crafted rationality enough to overcome these really dumb post-hypnotic suggestions? Surely if you can’t convince yourself that your moral disgust is irrational in clear-cut situations like these, your chances of tackling your own biases in more complex and emotionally charged issues are pretty slim.
Obviously there’s some disclaimer to be attached when talking about hypnosis, but still it seems like a hell of a starting point.
Not sure what the current state of this issue is, apologies if it’s somehow moot.
I would like to say that I strongly feel Roko’s comments and contributions (save one) should be restored to the site. Yes, I’m aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I’m assuming someone can contact him), I think his comments should be restored by an admin.
Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.
So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.
Interesting article about a study on this effect:
Dweck’s researchers then gave all the fifth-graders a final round of tests that were engineered to be as easy as the first round. Those who had been praised for their effort significantly improved on their first score—by about 30 percent. Those who’d been told they were smart did worse than they had at the very beginning—by about 20 percent.
Dweck had suspected that praise could backfire, but even she was surprised by the magnitude of the effect. “Emphasizing effort gives a child a variable that they can control,” she explains. “They come to see themselves as in control of their success. Emphasizing natural intelligence takes it out of the child’s control, and it provides no good recipe for responding to a failure.”
I don’t think Harry’s dark side is supposed to be limited to dark solutions, it just happens to be an ultra proficient problem solver. It may have dark tendencies by virtue of being an embedded copy of the mind of Voldemort, but there’s no obvious reason it can’t be used for good.
The beginning of this post (the list of concrete, powerful, real/realistic, and avoidable cases of irrationality in action), is probably the best introduction to x-rationality I’ve read yet. I can easily imagine it hooking lots of potential readers that our previous attempts at introduction (our home page, the “welcome to LW” posts, etc) wouldn’t.
In fact, I’d nominate some version of that text as our new home page text, perhaps just changing out the last couple sentences to something that encompasses more of LW in general (rather than cogsci specifically). I mean this as a serious actionable suggestion.
For the sake of constructive feedback though, I thought that much of the rest of the article was probably too intense (as measured in density of unfamiliar terms and detailed concepts) for newcomers. It sort of changes from “Introduction for rationality beginners” to “Brief but somewhat technical summary for experienced LWers”. So it could be more effective if targeted more narrowly.
“The mind commands the body and it obeys. The mind orders itself and meets resistance.”
- St. Augustine of Hippo
I have a similar experience whenever I find myself in a church nowadays (happens sometimes for social reasons), and I can say confidently that it’s steadily intensified as I’ve delved into rationality. As best as I can tell, what really makes me furious isn’t the speaking end, but the receiving.
It’s some combination of the social setting, the groupthink, and (what I imagine to be) the mentality of the individuals nodding along. When I sort of “put myself in their shoes”, it’s as though I can feel the biases and motivated cognition and self-deceptive signaling behavior and strawman arguments and rehearsed evidence by which these people convince themselves of their beliefs (in both the “belief” and “belief in belief” sense), and that is what makes me furious. If I could, even in principle, stand up and cry out in frustration at what nonsense the minister is preaching, and reasonably expect people to notice it was nonsense once it was pointed out, I’d be fine. What I find intolerable is the self-crippling psychological defenses in the audience: you can’t help them, because they don’t want to be helped, and they have gone far, far out of their way to remain beyond the reach of reality.
Unless I’m modeling them very incorrectly. But what little conversation on the subject I’ve had/heard with them doesn’t suggest this is the case.
Anyway, this just resonated with me because of the culture of non-criticism you mentioned Charlie cultivating. It has the same memetic defense structure: we should stand up and cry out against it, but in doing so we only guarantee that we will be shut out or dismissed. It’s a very frustrating situation, and perhaps that was a part of what you experienced as well.
I believe Snape’s “Sunk Costs” hangup is also alluded to in Ch 91:
“Do you intend to declare that your life is now a ruin and that there is nothing left for you but vengeance?”
“No. I still have—” The boy cut himself off.
“Then there is very little advice that I can give you,” said Severus Snape.
Eliezer Yudkowsky two-boxes on the Monty Hall problem.