Commentary on LessWrong and its norms

That I would like to share. I recently found it on the blog Writings by James_G. I am going to add some emphasis and commentary of my own, but I'm mostly interested in how other LWers see this. The main topic of the post itself is politics and cooperation, but I want to emphasise that it isn't the topic I'd like to open.
...
So, neurological egalitarians like (I should imagine) Zachary and neurological racist-authoritarians like myself need to be able to cooperate. Unfortunately, politics is the mind-killer.
No wait—that can’t be true. I’m writing this highly political essay, and my mind ain’t killed (Aberlour notwithstanding). This is the problem with Yudkowsky: he’s right so often, that the odd misfire goes unnoticed.
People go funny in the head when talking about politics.
Close, but no cigar. People go funny in the head when their emotions are aroused, and “political” arguments tend to be provocative. Thinking about and discussing politics doesn’t always evoke strong emotions; strong emotions can be evoked by things other than politics. Politics and out-of-control emotions are closely related, but here Yudkowsky didn’t cleave reality at its joints.
Yudkowsky’s rationalist forum, lesswrong.com, is based on the idea that politics is the mind-killer. When someone comments on what he considers a political subject, he apologises for dropping a mind-killer. Political arguments are taboo. The forum also has a karma system: every post and comment is subject to anonymous positive and negative ratings from other users. This is especially effective because of the forum members’ high regard for LessWrong’s majority opinion; negative karma is an assault on one’s soul. Given the high quality of the founding population (Overcoming Bias commenters), these features make LessWrong an unusually civil place.
Yeah, it kind of can feel like that. Consider the strong reactions, and even written-out objections, people have when downvoted. Yet I think we should be doing more downvoting.
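As an aside, the mechanism being described is easy to state precisely. Here is a toy sketch of my own (in Python; an illustration only, not LessWrong's actual code) of an equal-weight karma tally, where every anonymous vote counts the same:

```python
# Toy illustration of an equal-weight karma tally: every voter's
# upvote (+1) or downvote (-1) carries the same weight, and the
# author cannot tell whose vote is whose. My own sketch, not
# LessWrong's actual implementation.

def karma(votes: list[int]) -> int:
    """Sum equal-weight votes; nothing distinguishes a jerk's
    downvote from anyone else's."""
    assert all(v in (+1, -1) for v in votes)
    return sum(votes)

print(karma([+1, +1, -1, +1, -1, -1, -1]))  # -1: net negative karma
```

The design choice criticised later in the comments is exactly this equal weighting; a moderated scheme would weight or filter votes instead of summing them.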
So, is LessWrong an exemplar for efficient cooperation across the neuropolitical divide? I don’t think so.

There seems to be evidence that we are indeed failing at this.
First, enforcing the no-politics taboo isn’t straightforward. “Politics” is an ill-defined term. It means roughly, “ideas and arguments associated with governance, how people should live, and decisions that significantly affect many people’s lives”. A LessWrong thread about the irrationality of Keynesianism and fraudulence of Keynesian economists would be highly political—seditious. But a (quite interesting) thread about Awful Austrians isn’t political, because Austrian economists are marginal. Austrian theory isn’t influential and might never be, therefore attacking it doesn’t seem political in everyone’s eyes. In this way, no-politics can easily become no-political-opinion-that-isn’t-mainstream—not a recipe for rationality.
Another problem is that the scope of “mind-killing arguments” is embarrassingly wide. For example:
I’ve recently read a lot of strong claims and mind-killing argumentation made against E.Y.’s assertion that MWI is the winning/leading interpretation in QM. The SEP seems to agree with this, which means I’ve got a bottom-line here to erase since both of my favorite authorities agree on that particular conclusion.
If arguments about quantum mechanics are mind-killing, what isn’t? Is arguing in general taboo? That isn’t rational.
Emotion is the mind-killer, so an apolitical argument could kill a nerd’s mind. For example, his opponent might insinuate that only rubes take the Copenhagen interpretation seriously. Being insulted, or simply losing an argument can stimulate emotions. But a rational person learns not to let anyone kill his mind (and to be a skilful mind-assassin when it suits him). To describe every firm clash of opinions as “mind-killing” is a self-fulfilling prophecy.
Emotions may have evolved to permit ignorant humans to practise timeless decision theory in situations requiring reciprocity and deal-making, like the Parfit’s Hitchhiker thought experiment. “Emotion” signifies a shift in the balance of mental sub-agents, which induced TDT behaviour in fecund ancestral humans. If the brain in question is a moral realist, it rationalises these emotions using moral projectivism: “I responded like that because he was morally wrong”. This epistemic error obstructs the displaced sub-agent from regaining control; moral realism legitimises upstart sub-agents.
Some emotions don’t prompt moral rationalisation. The excuse for odd behaviour associated with mating is, “I love X”. Love is nonetheless another instance of TDT pre-commitment; but since mating is a private interaction, unlike morality the (degenerate) rationalisation for the emotion of love need not act as a common currency for collective negotiation and deal-making. Whether or not TDT considerations fully explain the evolution of emotion, we know that emotion “kills minds”—it promotes upstart sub-agents—and we can identify its causes.
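The Parfit's Hitchhiker structure mentioned above is easy to make concrete. A minimal sketch, with utility numbers invented purely for illustration: the driver is assumed to predict the hitchhiker's disposition, so only an agent who can credibly pre-commit to paying (the role emotion is said to play here) gets rescued at all.

```python
# Minimal sketch of the Parfit's Hitchhiker payoff structure.
# The utility numbers are invented for illustration only.

def hitchhiker_utility(pays_when_safe: bool) -> int:
    """Utility for a hitchhiker with a given disposition, assuming
    the driver predicts that disposition accurately and only rescues
    agents who would pay once safely in town."""
    DIE_IN_DESERT = -100     # the driver leaves the would-be welcher
    RESCUED_MINUS_FEE = 90   # survival (+100) minus the fee (-10)
    return RESCUED_MINUS_FEE if pays_when_safe else DIE_IN_DESERT

# The pre-committed agent strictly dominates the one who would
# coolly refuse to pay once the danger has passed.
assert hitchhiker_utility(True) > hitchhiker_utility(False)
```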
Internet fora are provocative. Anyone can comment; even if 9 out of 10 discussants are reasonable, there’s always a jerk. The low bandwidth of internet discussions also causes problems. In meatspace, body language, tone of voice and familiarity allow people to respect one another’s emotional limits; internet interlocutors inadvertently upset one another. LessWrong’s karma system is also subtly infuriating. Outside cyberspace, nobody can snipe someone’s reputation with the impunity of the anonymous, silent downvoter. In real life, not everyone’s opinion is equally status-enhancing or -detracting, and every off-hand comment isn’t susceptible to meticulous scrutiny. Unwarranted downvotes—and jerks’ downvotes are indistinguishable from anyone else’s—are the Jim Jones of mind-killing.
LessWrong does a great job of maintaining civility; a more polite, entirely open internet forum I cannot imagine. But the costs of the no-politics taboo and karma system—entrenching mainstream ideas, stifling discussion of important problems, and creating effete rationalists—are unavoidable, and gradual dissipation of the highly rational, open-minded Overcoming Bias founding group may exacerbate these downsides.
A completely open forum, however effective the karma system and informal rules, doesn’t permit neurological leftists and rightists to cooperate and discourse efficiently. Still, internet fora are a great means of exchanging information. To confine useful discussion to email and glacial blogospheric exchanges isn’t ideal. We need a way to discuss politics honestly, without emotional turmoil. I propose two things: a protocol, and a forum design.
The protocol is a formal way to conduct internet discussions, which minimises mind-killing. First, each discussant must state his utility function. “Humans” don’t have utility functions, but their sub-agents do. For example, I (speaking now) am a hedonic utilitarian, which inhabits a brain populated by competing sub-agents. Refusal to state a utility function implies failure to accurately reduce the “I” in a statement like “I want to do X”.
Discussants whose utility functions differ substantially must accept that this is an impediment to cooperation. But the strongest sub-agent in an educated mind is usually a hedonic utilitarian. Ideally, all parties to a discussion claim to share the same utility function.
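To make the proposal concrete, here is a hypothetical sketch of how such a protocol could be enforced by forum software. The class and function names are my own inventions for illustration; James_G specifies no implementation.

```python
# Hypothetical sketch of the proposed discussion protocol: each
# discussant must declare a utility function before posting, and a
# mismatch is flagged up front as an impediment to cooperation.
# All names here are illustrative, not a real forum's API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Discussant:
    name: str
    utility_function: Optional[str]  # e.g. "hedonic utilitarian"

def may_post(d: Discussant) -> bool:
    # Refusing to state a utility function bars participation.
    return d.utility_function is not None

def cooperation_warning(a: Discussant, b: Discussant) -> Optional[str]:
    # Differing declared utility functions are surfaced before the
    # argument starts, rather than left to emerge mid-discussion.
    if a.utility_function != b.utility_function:
        return (f"{a.name} and {b.name} declare different utility "
                f"functions; expect an impediment to cooperation.")
    return None
```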
...
I’m not sure this protocol is workable. The full article is here.
In response to the blog post Nyk writes:

I am of the opinion that the “politics is the mindkiller” rule is bad because it allows some unrecognized aspects of mainstream politics (i.e. Universalism) to slip under the radar. Many Universalist ideas are ‘no longer politics’ in the same way that they are ‘no longer Christian’, allowing them to bypass any red flags that might be triggered by politics and religion respectively.
James_G responds: “Echoing Player of Games: my imagined forum design is federalist, like the style of government I favour; LessWrong exhibits democratic degringolade, as does today’s West.”
Also, the karma system on LW has all of the bad characteristics of demotism, and the fact that such a system of votes of equal value was chosen in the first place again seems to point to demotist bias. I would very much prefer moderation similar to the dictatorial one of Razib on GNXP.
Konkvistador (me): Razib’s harsh style does indeed create a comment section well worth reading.
My opinion is that groupthink is already quite strong on LW at this point in time; I am not sure how it was in the past. Their preference for philosophy and pure reason (rather than experimental science) is immediately obvious to me as an outsider; also, some of them seem obsessed with a few topics while losing sight of other important issues. I presume that is because of their (or should I say our?) highly atypical psychological profile: many Asperger types (some borderline, but many quite beyond that), and heavily risk-averse types. There have even been articles calling for the sabotage of scientific research into computing and AI for as long as they consider their current pet obsession (Friendly AI) not yet implemented in a 100% safe manner. I for one believe it becomes impossible to achieve anything at all if you crawl into a hole for fear of inadvertently creating paperclip maximizers. At some point, you have to take some risks and get past analysis paralysis.
James_G responds: “I can’t fault this.”
Zack M. Davis criticizes James_G’s approach of viewing humans as a collection of subagents:
The subagents idea is interesting, but it seems like a metaphor at best. That humans are an incoherent kludge of partially-conflicting values is indisputable, but to say that they meaningfully factorize into subagents seems like a much stronger claim; I don’t understand what is gained by speaking of a dominant hedonistic utilitarian subagent coexisting with ideological upstart subagents, when one can just say “I value (or ‘this brain contains parts that value’, &c.) pleasure, and antivalue pain, and I also value these-and-such political goals, but not quite as much as I antivalue pain.”
Is this a mere semantic quibble?—possibly, but near the end of your “Beyond Moral Anti-realism,” you seem to want to attribute your writings to your hypothesized hedonic utilitarian subagent, and in this post you write that “the strongest sub-agent in an educated mind is usually a hedonic utilitarian[;] [i]deally, all parties to a discussion claim to share the same utility function,” and it seems unnecessary; I don’t need to suppose that my values factorize in any particular way, nor disparage any of them as mere inferior upstarts, in order to be eager to cooperate and discuss ideas with smart, sane people who happen to like things I find distasteful or abhorrent.

I strongly agree with this.