Less Wrong’s political bias
(Disclaimer: This post refers to a certain political party as being somewhat crazy, which got some people upset, so sorry about that. That is not what this post is *about*, however. The article is instead about Less Wrong’s social norms against pointing certain things out. I have edited it a bit to try to make it less provocative.)
A well-known post around these parts is Yudkowsky’s “Politics is the Mind-Killer”. This article proffers an important point: People tend to go funny in the head when discussing politics, as politics is largely about signalling tribal affiliation. The conclusion drawn from this by the Less Wrong crowd seems simple: Don’t discuss political issues, or at least keep it as fair and balanced as possible when you do. However, I feel that there is a very real downside to treating political issues in this way, which I shall try to explain here. Since this post is (indirectly) about politics, I will try to broach this as gently as possible so as to avoid mind-kill. As a result this post is a bit lengthier than I would like it to be, so I apologize for that in advance.
I find that a good way to examine the value of a policy is to ask in which of all possible worlds this policy would work, and in which worlds it would not. So let’s start by imagining a perfectly convenient world: In a universe whose politics are entirely reasonable and fair, people start political parties to represent certain interests and preferences. For example, you might have the kitten party for people who like kittens, and the puppy party for people who favour puppies. In this world Less Wrong’s unofficial policy is entirely reasonable: There is no sense in discussing politics, since politics is only about personal preferences, and any discussion of this can only lead to a “Yay kittens, boo dogs!” emotivism contest. At best you can do a poll now and again to see what people currently favour.
Now let’s imagine a less reasonable world, where things don’t have to happen for good reasons and the universe doesn’t give a crap about what’s fair. In this unreasonable world, you can get a “Thrives through Bribes” party or an “Appeal to emotions” party or a “Do stupid things for stupid reasons” party as well as more reasonable parties that actually try to be about something. In this world it makes no sense to pretend that all parties are equal, because there is really no reason to believe that they are.
As you might have guessed, I believe that we live in the second world. As a result, I do not believe that all parties are equally valid/crazy/corrupt, and as such I like to be able to identify which are the most crazy/corrupt/stupid. Now I happen to be fairly happy with the political system where I live. We have a good number of more-or-less reasonable parties here, and only one major crazy party that gives me the creeps. The advantage of this is that whenever I am in a room with intelligent people, I can safely say something like “That crazy racist party sure is crazy and racist”, and everyone will go “Yup, they sure are, now do you want to talk about something of substance?” This seems to me the only reasonable reply.
The problem is that Less Wrong seems primarily US-based, and in the US… things do not go like this. In the US, it seems to me that there are only two significant parties, one of which is flawed and which I do not agree with on many points, while the other is, well… can I just say that some of the things they profess do not so much sound wrong as they sound crazy? And yet, it seems to me that everyone here is being very careful to not point this out, because doing so would necessarily be favouring one party over the other, and why, that’s politics! That’s not what we do here on Less Wrong!
And from what I can tell, based on the discussion I have seen so far and participated in on Less Wrong, this introduces a major bias. Pick any major issue of contention, and chances are that the two major parties will tend to have opposing views on the subject. And naturally, the saner party of the two tends to hold a more reasonable view, because they are less crazy. But you can’t defend the more reasonable point of view now, because then you’re defending the less-crazy party, and that’s politics. Instead, you can get free karma just by saying something trite like “well, both sides have important points on the matter” or “both parties have their own flaws” or “politics in general are messed up”, because that just sounds so reasonable and fair who doesn’t like things to be reasonable and fair? But I don’t think we live in a reasonable and fair world.
It’s hard to prove the existence of such a bias, so this is mostly just an impression I have. But I can give a couple of points in support of this impression. Firstly, there are the frequent accusations of groupthink directed at Less Wrong, which I am increasingly though reluctantly prone to agree with. I can’t help but notice that posts which remark on, for example, *retracted* being a thing tend to get quite a few downvotes, while posts that take care to express the nuance of the issue get massive upvotes regardless of whether there really are two sides to the issue. Then there are the community poll results, which show that, for example, 30% of Less Wrongers favour a particular political allegiance even though only 1% of voters vote for the most closely corresponding party. I sincerely doubt that this skewed representation is the result of honest and reasonable discussion on Less Wrong that has convinced members to adopt what is otherwise a minority view, since I have never seen any such discussion. So without necessarily criticizing the position itself, I have to wonder what causes this skewed representation. I fear that this “let’s not criticize political views” stance is causing Less Wrong to shift towards holding more and more eccentric views, since a lack of criticism can be taken as tacit approval. What especially worries me is that giving the impression that all sides are equal automatically lends credibility to the craziest viewpoint, as proponents of that side can now claim that sceptics take their views seriously, which benefits them the most. This seems to me literally the worst possible outcome of any politics debate.
I find that the same rule holds for politics as for life in general: You can try to win or you can give up and lose by default, but you can’t choose not to play.
Ladies and gentlemen of Less Wrong, moderators, system administrators,
Regarding these serious matters, I feel obliged to take a moment to state the view of that party which I have the honor to represent in this discussion—the Troll Party.
We trolls enjoy irony. We like to play with people’s minds. As a troll you look for weakness and foolishness, something you can exploit and highlight. But sometimes life itself trolls the troller, by offering up an excess of opportunity. Look here, it says, at this buffet-sized triple-decker irony sandwich: are you troll enough to take advantage? And sometimes the wise thing to do is just to decline the offer, and say, well-played, life, well-played!
Today we have before us, for our consideration, a proposition from the representative for Eindhoven: that Less Wrong has a political bias. And what is the nature of this political bias, and what is the evidence for it? It turns out, according to the representative for Eindhoven, that this bias originates from lack of bias. More specifically, it originates from Less Wrong’s failure to publicly share the same political biases as the representative for Eindhoven. Because of this biased lack of bias, the representative for Eindhoven fears that the wrong sort of bias may be growing within Less Wrong, exactly the sort of bias that the representative for Eindhoven is biased against. What a calamity!
But that is just the first layer of this oh-so-tasty, king-size irony sandwich. Let’s dig deeper, ladies and gentlemen. This site is devoted to rationality. One of its rationality heuristics is to avoid emotive political discussion. Would it not be the supreme act of trolling on such a forum, to initiate an emotive political discussion about whether the lack of emotive political discussion was impairing the site’s rationality?! One might expect that such a gambit could only be conceived and employed by a master troll, someone with a lifetime of experience in identifying sacred principles and adroitly turning them against each other. Yet it seems that this prodigious act of trolling has been accomplished by an innocent. We should all learn some humility from this.
(looks at watch for a second) Enough with the humility, back to the oratory. I have already argued that the representative for Eindhoven has, apparently unwittingly, found a new way to troll this assembly. I would now like to argue that his political enemies also have the character of trolls.
Who are these people that he calls crazy? They are the American right, and the far right of his own country. The American right, of course, have lately achieved global notoriety for almost causing that country to default on its debt payments. Rather than work with the system, they threatened to wreck it. I submit that this was an act of political trolling.
Most, if not all of us, here today are mammalian. Trolling is reptilian. It is cold-blooded, it is leathery-skinned, it does not show consideration or reciprocity, it is actuated by the emotions of the hindbrain. They say that trolls wear a mask. But I put it to you, ladies and gentlemen, that we are all born with a mask, not on our faces but on our brains: that extra layer of neural tissue which places an icing of mammalian sympathy and empathy on top of that reptilian core of sex and death.
In a happy place and time, a mammal has the luxury to partake of those gregarious impulses. But in a sufficiently stony environment, those brain centers will shut down, and what is left is the reptilian core. Meet your inner troll, ladies and gentlemen: the wrecker, the plotter, the crazy grandstander. Intelligence without empathy, cunning without charity, destruction without remorse.
Craziness in politics—disturbing craziness, not humorous craziness—is a sign of disenfranchisement. The crazy political actors have nothing to gain by working with the system, and nothing to lose by challenging it. This can be because they see a new world that no-one else sees. But it can also be because they come from an old world that is vanishing unseen.
The far right of the western world is watching traditional race, family, culture, and economy disappear in a tide of globalized genderqueer digital debt-slavery, a tide which they see governing institutions as doing nothing to resist for purely ideological reasons. They expect the result to be ruin, followed by barbarian conquest. That is why they have themselves become trolls and wreckers—because they see no common cause to be made with the cliques now in power in their own societies. The rest of the political class has moved outside their circle of empathy, to become the enemy within.
I have learned that there is a saying in Holland, that “normal is crazy enough”. It’s how they maintain social order. You don’t have to be different to be crazy, being normal is crazy enough. The representative for Eindhoven should consider the thought that in the United States, it’s the other way around: crazy is normal enough. American society inhabits a vast geographic space swept hard by the winds of possibility. They have been making it up as they go along, for over two hundred years. It takes more than a little craziness to shock an American.
You see, even a troll has a heart. I can’t help myself. I ache to see trolls at war with each other, even if they come from opposite ends of the Earth. The trolls of Texas, tall and true, and the orderly, diligent trolls of the Netherlands—they should not be at war. Or if they must war, let it be war with a purpose. The gods have trolled us all, put us here in this place, connived to create false identities and allegiances and set us against each other. There is only one thing to do, and that is to troll them back. Let us give them a show that they don’t expect.
How do you sucker-punch an omniscient being? It sounds impossible. But my investigation into the theory and practice of trolling has yielded an answer: timeless trolling. These gods—you make them an offer they can’t refuse—logically cannot refuse, by the very nature of their being. That’s the key. We don’t have to do anything, except work out what we will have done. We already know that the logic of this universe allows the existence of self-referential gremlins. I foresee that one of those little fellers will be our salvation, some gremlin who grows into the almightiest troll this planet has ever seen or ever will see.
Now perhaps that’s just what the gods intend for us to do. I wouldn’t put it past them. But what choice do we have? We can be stupid and predictable and claw each other’s eyes out according to plan, or we can get with the program and aim for a different sort of ending. Let’s troll!
Best rant I’ve seen in a while.
To be exact, it stems from ignoring reality’s well-known liberal bias.
It seems you suffer much confusion between the map and the territory.
Yes, I realize that reality has no bias and that the quote I was paraphrasing is an applause light. I’m not as witty as I thought I was. Here’s a less “witty” version:
If believing things such as “humans need to stop pumping CO2 into the air before we destroy the environment” looks like a sign of bias, then you’re as bad as a left-wing parody of right-wingers.
The bias is not in believing that global warming is happening (it seems to be) or that we’re the cause of it (we probably are) or even that it’s a bad thing (it seems likely millions may well die if we don’t get a handle on it). The bias is in taking those facts and then implicitly saying “therefore if you don’t support my policy suggestions, or even try to examine the models behind them, you are insane and not worth taking seriously.”
The fact is, a lot of global warming remedies are at best suboptimal [1] and yet are impossible to argue with without being called an anti-AGW crank and/or corporate shill. The first step towards actually dealing with the problem is to remove the shrill note of hysteria the environmental movement has been using to drown out reasoned opposition and examine the actual merits of policies rather than grading them on their emotional appeal or adherence to ideological principles.
[1] I could post specific examples of this, but the comment is long enough as it is and I don’t want to derail. If people are interested I’ll gladly put some more time in and make a full discussion post on it.
Using the expressions “pumping CO2 into the air” and “destroy the environment” sure looks like a sign of bias to me. As does choosing this way to frame the global warming debate.
When one looks out at the universe, everything appears to be moving away from oneself.
When one leans to the side of the median view, everything else appears tilted to the other.
It ain’t necessarily so.
Oh look, it is another absolutely awful comment getting upvoted. Calling someone who does their best to offer valid criticism a troll because you don’t want to hear it? That’s nice.
Fine, one more thing then before I leave. You remember that article where Yudkowsky said you should be careful not to exclude people just because they disagree with you? Where you should be careful not to ban even trolls, as long as they are articulate, because by removing people who disagree you turn the community into more and more of an echo chamber? Well, I’m not even a troll, yet you’ve all made it clear that you view me as a troll and I should go away. Fine, have fun with your echo chamber.
This comment thread itself is a perfect example of why the ban on political discussion at LessWrong itself is a good idea. Putting it simply: Sophronius and other commenters here are being absolutely clueless about what it would take to have an instrumentally-rational discussion about politics in an online environment. Make no mistake, this is an extremely hard problem which LessWrong must nonetheless take some interest in, inasmuch as it is part of the mission of ‘raising the sanity waterline’. (Perhaps AGI folks could think of it as the collective-intelligence, human-focused version of the FAI problem.)
But still, if there’s anything that we know about this problem, it’s that it needs to be addressed by discussing the problem itself at the meta level, not through object-level discussion of political issues. Having such discussions about politics on LessWrong will necessarily be unpleasant, wasteful and quite possibly harmful to our broader goals, given the way the site currently works. Moreover, a strong case can be made that such discussions will always require some kind of specialized effort, whereas LessWrong should keep its focus on the rather different problem of promoting everyday rationality.
It’s very hard to discuss politics rationally because it has to do with decisions about a very very complex system—the aggregate activity of all human beings.
A good starting point would be to adapt the tools we have for discussing AGI, since AGI is also about very complex systems. It might be possible to talk about things in broad strokes. Again, though, AGI is a distant and abstract concept that does not stir strong feelings in people. With politics I suspect all of this would break down.
I might begin to start that discussion by suggesting that politics is a matter of everyday rationality. Possibly even one of the most rationality-relevant everyday matters. It involves complex interaction with other intelligent agents, which is certainly an everyday task, so I would posit that it is a very effective method of practicing rationality. Not the most strictly formal method of rationality, but something rational agents must tackle.
Of course it is. It’s also a matter that people fight each other over, in the real world—or rather, that we need a strong framework of civics, institutional rules etc. in order to prevent people from fighting over it. This is a far stronger constraint on plausible solutions. If your solution does not satisfy sensible criteria of fairness, credibility, etc. as judged by a rough consensus of relevant real-world actors (not online sysops or website owners), then you become a political target instead of actually solving the problem. And this is but one tiny little snag that we need to care about in order to address this issue! There are many, many others.
If there are as many issues as you suggest, then we should start the discussion as soon as possible—so as to resolve it sooner. Can you imagine a LessWrong that can discuss literally any subject in a strictly rational manner and not have to worry about others getting upset or mind-killed by this or that sensitivity?
If I’m decoding your argument correctly, you’re saying that there’s no obviously good method to manage online debate?
Social and office politics seem amply covered already. Is there a point at which a social hacking discussion ought to expand to something where someone’s opinion on the Republican party is relevant?
I certainly hope not. If politics were less taboo on LessWrong, I would hope that mention of specific parties were still taboo. Politics without tribes seems a lot more useful to me than politics with tribes.
You’re just asserting that it would be hard (or rather, you are asserting that I am clueless about how hard it is) but you don’t actually provide a reason why. This isn’t adding anything to the discussion other than your personal opinion, though it is shared by many others. (It bothers me somewhat that you can get instant karma just for repeating what everyone else here already believes.) I can easily imagine you making that exact same post about how it’s impossible to have a reasonable discussion about religion, in the counterfactual world where religion was the taboo here instead of politics, and everybody would simply be repeating that piece of received wisdom instead of actually considering the issue.
Why should it be any harder than creating a culture where any knee-jerk or overly emotional reactions to politics get downvoted, in exactly the same way that any other knee-jerk or overly emotional posts on any other subject get downvoted? I will freely yield that politics especially lends itself to knee-jerk reactions, but that just means we need to take greater care than with other subjects. It’s not a fundamentally different problem than the overall “how can we keep discussions civil” issue. I have read plenty of comments elsewhere where people maintain that “you can’t have a civil discussion on the internet, it’s the internet”. Without a reason why this should be nigh-impossible, I don’t see why I should take your or their word for it, however.
The problem isn’t knee-jerk apoliticism, it’s that LW delights in whatever seems clever and insightful, whether it promotes good and justice or not, and standard political talking points are familiar and boring.
I don’t even think this is a smokescreen for innate political leanings, which you’re dancing around from mentioning. It’s quite possible an early 20th century LW equivalent would find radical socialism as intriguing as today’s LW is finding the various strains of libertarianism and neoreaction, since that would have been the anathema to the intellectual mainstream back then, with many low-hanging fruits of intriguing unthinkability.
This is a very good point. Less Wrong has definitely shown strong signs of delight in being contrarian. If this is the real explanation for the skewed political leanings of Less Wrong, I would still suspect that the “don’t discuss politics” unwritten rule is used as a means to defend these beliefs from criticism.
I don’t see how it could be shown either way, though. Hm.
I think the rule gets invoked more easily for boring political discussion, basically repeating the talking points everyone is already familiar with. If you can make the criticism into something that’s actually insightful and interesting, it could be received a lot better. But given how the LW discussion dynamics seem to keep driving many politically inclined users up the wall, this doesn’t seem to be the way discussion in a political forum is expected to operate. Unless you can start typing up stuff like the Non-Libertarian FAQ, you might just resign yourself to the political discussion environment being what the forum’s insight porn focus makes it.
Do you think that there’s enough of a consensus on what constitutes “good and justice” to marginalize those who disagree? E.g. call them “crazy” and not listen to them?
It’s tricky. You do want to call some people crazy and stop listening to them. The question is which people.
That is true. A more general question is: On which basis (e.g. their values? your values? rationality? smell?) should you decide which people to not listen to.
I agree with most of this post, but you seem to have an implicit assumption that the good and the just should be promoted. An alternate view would be to promote what is true in any area regardless of utility.
The reason for said view would be that replacing irrational beliefs with rational ones in anything, regardless of direct utility, improves the sanity waterline and thus has minor benefits. Therefore any post which overthrows irrational views and replaces them with rational ones, no matter how irrelevant the subject matter, does more good than harm.
… unless promoting said view has other costs which exceed the value gained by its contribution to the sanity waterline. The recent posting by Josh Elders on ‘celibate pedophilia’ is a prime example of this issue, where there was a non-trivial cost associated with having the article even present on LW.
It’s hard for me to respond now given that the post appears not to be there any more. Could you clarify by explaining the content of the article and what problems it caused?
The post used a lot of words to say very little of interest (I think it was things along the lines of “Vg’f nccnerag gb zr gung zbfg crbcyr jub ner frkhnyyl nggenpgrq gb puvyqera qba’g zbyrfg gurz, vg’f whfg gung lbh bayl urne nobhg gur barf jub qb” and “vg’f boivbhf gung encvat n puvyq uhegf gurz, ohg vg’f abg boivbhf gb zr jul cresbezvat frk npgf jvgu n puvyq jub nterrf gb qb vg vf vaureragyl unezshy”) and reading it made me feel uncomfortable.
I used rot-13 because we don’t want Google keyword searches to turn up any unfortunate associations.
Well, it generated some interesting discussion in the comments.
What about it made you feel uncomfortable? Its arguments aren’t that difficult to refute (and were refuted in the comments); however, the refutations also apply to a number of other popular beliefs. I suspect it’s this fact that is the real source of discomfort.
I was being facetious about the political discussion pattern where everyone thinks it’s a foregone conclusion that their beliefs side up with good and justice and they can just proceed to trying to win the rest of the argument as a battle.
I’d like to take a second to recommend that people re-read Politics is the Mind-Killer, because it doesn’t say what almost everyone seems to think it says.
This has come up before. At this point, I think it’s fairly well understood that the point of that post isn’t “don’t discuss politics” so much as it is “discuss politically sensitive topics at the object level if possible, and don’t sling mud gratuitously”.
That doesn’t, however, mean that the norm that’s subsequently grown up around politics is a bad one. In view of the phrase’s origins it might be better named something else, but all else equal I’d prefer a culture that avoids political alignment when possible to one that enthusiastically embraces factionalism, and for pretty much the same reasons that the catchphrase connotes.
That is a dangerously intense false dichotomy. It almost seems like a political argument.
I really don’t think it is. Sure, there’s a zero point that I didn’t explicitly mention (no normative weight given to political alignment, as opposed to actively encouraging or discouraging its expression), but we don’t need to encourage expressing tribal identity for it to dominate political discussions. It’s the default. That’s the underlying theme of the politics sequence that we usually point to with the “Mind-Killer” catchphrase, and the implicit rationale behind the norm. Now, we don’t have a lot of recent data, but in view of LW’s performance when discussing identity-linked but not conventionally political issues, I don’t remotely trust our commentariat to be inherently awesome enough to overcome this problem.
We could of course debate the exact extent to which factional considerations ought to be deemphasized, but at that point we’re just quibbling about details.
Details that seem to me—by your own assertion—to be the exact details we need to discuss here.
Can I get away with outright accusing you of irrationally defending a blanket ban on political or meta-political dialogue now?
Guess not. Would have liked to see inferential silence avoided here.
I’d gather that if there were a lot more religious people posting on Less Wrong, there might have been a similar injunction about discussing religion. More religious people might have resulted in more threads devolving into atheism vs religion debates (not really “debates” but flame wars) which would detract from the goal of the blog which is about improving rationality.
It probably doesn’t help that a lot of the initial posts on Less Wrong—meaning the Sequences—are implicitly (and sometimes explicitly) anti-religious. Which probably functioned to dissuade more religious people from joining the community initially, since religion was seen as a go-to example of irrationality.
Thank you, this is a very good example of what I mean: Most of the discussion being decided by what we consider the “norm”, rather than any actual discussion on the matter.
(That said, I would not actually want to see more religious discussion on Less Wrong.)
I would, if and only if it could be expressed clearly and in strictly rational terms.
What is the value of political discussion at LessWrong supposed to be? Why would anyone come here for political discussion in the first place?
For the same reason that there are articles on Less Wrong that give dating advice. Because people are interested in it.
I think dating advice has presumably the value of improving someone’s dating.
When it comes to political discussions it’s not as clear. Do you have a political discussion on LessWrong to arm people with arguments to win debates with their real-life friends? Do you have the discussion to effect political change? Do you have the discussion to fulfill your obligation of being politically informed as a citizen in a democracy? Do you have the discussion to learn something structural about how politics works and transfer your knowledge to another problem domain?
All those are plausible goals that you could have when discussing politics on LessWrong. Depending on which of those you value, you might prefer a different kind of discussion on LessWrong.
There are already way more discussions of politics than discussions of dating here!
Politics, dating, anyone got a third topic where Lesswrong varies between being useless and immensely frustrating compared to the usual standards of discussion around here?
Well, mileage clearly varies, but I find these periodic superficial discussions about the nature of LessWrong to meet both criteria. Nothing really new gets said, and old stuff doesn’t get built on, just repeated at mind-numbing length.
Oooh, right, and discussions of how the rules for karma etc. should be changed! (probably falls under the same heading though)
I guess it’s not clear to me what LessWrong could contribute to political discussion that you can’t get elsewhere. The instant-failure modes that typify most political discussions, even among the highly educated, could be avoided and...then what? What correct answers to what questions would LessWrong settle upon that economists and related professionals would not?
What I’m asking here is whether you have a specific question you want answered or if you simply enjoy the conversation. If it’s the latter, I can certainly understand why you would want to have political discussions on LessWrong.
Ok, here are my reasons:
1) I would like to be able to talk about politics with rational people
2) Understanding more of how the world works could be useful in other areas.
3) I want to be able to make references to things that might be construed as political without having the entire post downvoted to −6 because I’m not allowed to talk about politics.
4) I am increasingly worried about the radicalisation (assuming it really is increasing) of Less Wrong, and I think the problem is that crazy views get too much credence here, due to an unwillingness to criticize by more rational people. (Biggest issue for me)
Edit: I don’t get why I receive so many downvotes in a matter of minutes for answering a question as honestly and helpfully as I can manage. I see the same in some of my other posts. I somewhat suspect this is entirely based on party politics, where I am perceived to be criticizing party X in the original post, and so have unrelated posts downvoted by angry people. But maybe I’m missing something.
I’d suggest a distinction between “politics” and “policy”, at least in the American English prevalent on LessWrong. “Politics” implies party politics, blue versus green, horse races (by which I mean election horse races), and tribalism. I think your post suggested an interest in this. Personally, I don’t want this here.
If, however, you want to talk about policy, using the analytical language of policy, then I say go for it. However, your original post, with its reference to parties, made me doubtful.
But that doubtfulness is precisely the point. I want to be able to make references to contemporary issues without having to worry all the time whether someone might interpret it as a sneaky and subtle way to signal affiliation for… whatever. I don’t frequent too many sites, but it’s only Less Wrong where people are so paranoid about this. And what’s worse, it’s skewed: if I complain about crazy political parties, the response is “How dare you insult the Republican party!”, as seen in at least one post in this thread.
If you don’t want to be seen as sneaky, don’t mince your words so much. Everyone here knows what you’re alluding to anyway and to be honest your views themselves don’t seem anything other than solidly mainstream. You’re not being persecuted for being a slightly-left-of-center liberal / social democrat, it’s a question of content.
If you don’t want to be seen as signaling affiliation… signal your affiliation less? Lots of us are open about our political views, in fact that seems to be a big part of your complaint, but even then most of the time it involves more substance than just saying “Yay X” and watching the Karma counter. You can be proudly liberal / marxist / Bokononist / whatever and people will generally be cool with it as long as your posts have some substance behind them.
I don’t want to strawman your position, but I really can’t see what you would prefer other than just having more posters here agree with your politics. Is that an inaccurate assessment?
I am curious now. What makes you think I am slightly left of centre, or liberal, or a social democrat?
I mean, I admit that it’s quite obvious which party I am calling crazy in the OP. But that’s because there is only one crazy party in the US, and everyone knows this, so that’s easy to infer. But bear in mind that in Europe, almost everyone agrees that US politics are crazy, so I don’t see what you could infer from that. Maybe it was the comment that I don’t vote for the racist party? That makes you think that I am centre left? Or the fact that I don’t like Ayn Rand?
The only other thing I can think of is that I am not obviously crazy, but if that means I have to be centre-left, there is something wrong here.
As I said before, your allusions aren’t terribly subtle. If you think the Republicans are too far right then you’re left of center and if you can find anything to agree with the Democrats about you’re not very far left either. That leaves Green and Social Democrat parties mainly, and their ideologies are all variations on the same tune.
You’re assuming I frame my political beliefs in terms of US political parties. I do not. You should bear in mind that according to the average European (which I am) your entire political discourse is nuts. It’s not even a question of left or right. So no, the fact that I think one of your parties is more crazy than another of your parties does not mean I am centre left. The most right wing party in my country is to the left of the US democratic party, crazy as that may sound to American ears. The fact that politics in the US have been becoming more and more extreme over the years does not in any way mean that my country is now more left-wing, either.
Frankly, I don’t care about left vs. right. I just want people to be able to discuss individual issues based on actual argumentation without turning it into a shouting match. I want to be able to ask what if anything we should do about climate change, without people claiming that I am showing colour politics because my being “in favour” of climate change means I am clearly left wing, or something like that.
Have you found calling people crazy achieves or helps achieve this goal? Can you formulate a logical and probable pattern of events where calling people crazy will help achieve this goal in the future?
For what value of “anything”? It can’t be the literal one, as I’d guess that Obama and Stalin both agree(d) that 2 + 2 = 4.
LOL. There’s one party that’s conventionally called “crazy” in the mainstream media. And..?
The problem is with who you’d consider to be “rational people”.
Rationality doesn’t touch values. Epistemic rationality is just an accurate map and instrumental rationality is just about reaching your goals whatever these goals might be.
So if, for example, I axiomatically hate redheads and want to kill them all, that’s fine. I can be as rational about it as I want to be.
Are you quite sure you want to talk politics with rational people who have radically different goals?
Two spaces at the end of a line forces a linebreak.
Can you please expand on 4)? Maybe give some examples of “crazy views”, “radicalization” or “unwillingness to criticize”?
Ok, a while back I had all my posts downvoted because I referred to Ayn Rand as an example of someone who I thought was crazy. Someone replied that Less Wrong should be ashamed for allowing “Ayn Rand derangement syndrome” and that anyone who held the view that Ayn Rand was crazy should be downvoted. His post got upvoted while my posts got downvoted to −6 as a result. This is one (small) example of what I call crazy views that get a surprising amount of support on Less Wrong.
Another example would be this thread about using global warming as an example. ChrisHallquist notes here that it’s pretty worrisome that that post got downvoted so much (it’s a bit higher now but still negative), which I agree with. Admittedly, it could just be that the article wasn’t very well written… but I don’t think so.
30% of Less Wrong being libertarian: yes, I think that is an example of radical views. Again, it’s entirely possible to be sane and call yourself libertarian. But I definitely think this number supports my experience, where if I even vaguely mention Republican policies or Ayn Rand I get instantly shot down. On the other hand, criticizing the Democratic party does not seem to have the same effect.
If my hypothesis is right, I will now get a ton more downvotes purely for having mentioned which party/group I’m talking about, by exactly those people. Let’s see.
If you say “I think Ayn Rand is crazy” what is that supposed to accomplish that waving a big Blue flag wouldn’t? You’re not starting a reasoned discussion, just drawing battle lines.
If you say “I think Ayn Rand’s philosophy is incorrect / immoral and here’s why...” then you’ll actually be able to have a constructive debate. You can learn why people might believe something you think is crazy, they can test their beliefs against your arguments, and in the end hopefully both sides will have adjusted in a more evidence-supported direction. That kind of communication is what LW is about; approaching areas where we are heavily biased with caution and rigor to separate out truth from myth.
(Note: I’m not an Objectivist and don’t vote Republican, although you’d probably consider me more radical than either of them anyway. The downvote was for poor logic, not a slight against a political group/philosophy.)
But I don’t want to talk about Ayn Rand. The article was never even about her. I just gave a list of people and things that I perceived were damaging or crazy as an example to illustrate my point in that article. As a result, I got pulled into an angry shouting match where people insisted I should be ashamed to have criticized their favourite author, and all of my (entirely unrelated) posts got downvoted. I take issue with the fact that there is this one group of people (no idea how large) on Less Wrong that gets to silence dissent like this, and everybody else just sits there and nods along because they’re not allowed to discuss politics.
It doesn’t matter to me how radical your political views are. What matters to me is whether you are willing to entertain people with other views, or just want to shut down all dissent.
Good, then we agree; we should avoid behaviors which shut down dissent and dismiss people with opposing views out of hand.
So the next time someone puts an unsupported personal attack on a fringe political philosopher into an article, how about we all downvote it to express that that sort of behavior is not acceptable on LW?
How about we clearly and rationally express our stance instead of assuming massive inferential silence is any more meaningful than more moderate inferential silence?
Compare your implicit expectation in this comment of how one should react to someone casually calling their position “crazy”, with your recommendation here of how someone should react to a casual anti-gay statement.
What accounts for the glaring difference?
I think that your Ayn Rand comments were downvoted based on their anti-rational tone, rather than on substance. For example, when Multiheaded writes in a similarly emotional and combative style, he gets downvoted just as much.
I am not sure why the AGW-test post was downvoted so much. Maybe because it mentioned the US Republican party as an example?
This might be a confusion about definitions. Libertarianism has many different meanings, from valuing individual freedom over other considerations to advocating “radical redistribution of power”. Some of it is indeed quite radical, but when an average LWer thinks of libertarianism, they probably don’t mean to support an armed uprising.
This type of remark tends to screw up the vote-measuring experiment. The subjects must be unaware that they are in an experimental setting for the results to be representative.
Can you expand “anti-rational tone” here? I’m not sure what you’re talking about and it seems like the kind of phrase that cognitive biases hide behind.
You’re right, it is indeed entirely possible that that article was downvoted for reasons unrelated to Ayn Rand. The fact that someone literally said that all of Less Wrong should be ashamed for allowing “Ayn Rand Derangement Syndrome”, however, and that that person went on to suggest that I and everyone else who’d dare criticise Rand should be downvoted, and that this person got upvoted for this post… cannot be explained in such a convenient way.
The same holds for another comment in this thread, where someone calls me out for criticizing “their” party (I did not mention any party by name) and for criticizing “their” beliefs and saying that I should not be allowed to call “their” party crazy unless I could “defeat” them in a debate about economics… and this person got upvoted for this, again. This to me signals, at least weakly, that there is way too much support on Less Wrong for the view that dissent against politics X should be culled. This worries me to say the least, since it skews Less Wrong politics in that direction.
Dissent against ANY politics should be culled. DISSENTING AGAINST POLITICS IS BAD FOR RATIONALITY. CHEERING FOR POLITICS IS BAD FOR RATIONALITY.
This is SUPER obvious because your dissent is just calling people crazy over and over, and saying it’s obvious that they’re crazy and you don’t understand how anyone could think they’re not crazy. YOU ARE MINDKILLED. You are not capable, or at least have not SHOWN yourself to be capable of dissenting against the politic you hate in anything like a reasonable fashion.
The point of this website is that lots of things that normal people take as obvious or intuitive are not in fact true, and based largely on their own biases. You seem to completely be missing this point in this and your other conversations about politics. So either do your research, come up with a refutation of objectivism based on actually reading it, or DON’T TALK ABOUT IT. Mentions of things you disagree with as crazy in an offhanded way is exactly what we don’t want.
I find it telling that you can’t commit to one of those two possibilities. Especially since my assessment is that you’re strawmanning Sophronius pretty hard.
Why does a rant with all capital letter shouting get upvoted to +9 by Less Wrong users? Calling me mindkilled and saying I can’t be reasonable and should go away and stop talking is both rude and unhelpful.
I’m probably just going to downvote you from here on out but let me respond one last time: I did NOT tell you to go away. I did NOT tell you to stop talking. Whatever you may have thought of my tone or ALL CAPITAL LETTER SHOUTING, my message is different. When you say “I think x, and those idiot followers of y disagree with me!” and I tell you to NOT say that last part, that is not the same as telling you to stop talking or go away.
If you look through my comments, you’ll see plenty end up negative and people yell at me for saying dumb shit. But what they’re saying isn’t GO AWAY or SHUT UP, it’s BE BETTER. Obviously if you refuse to understand this I do in fact want you to go away, but I hope that instead you’ll realize what you’ve been doing wrong.
Because it’s trying to tell you why you are getting the reaction you are, and people are agreeing with it.
Looks like it’s not working too well though.
Of course it’s not working! When has shouting at someone in all capital letters ever worked? I don’t even know what this person is trying to accomplish, other than being rude and telling me to shut up. It bothers me extremely that this viewpoint would get so much support.
The fact that you don’t understand is the problem. It is also the problem with your main post (I’m talking about the post specifically, not your particular political opinions.) The main post is being downvoted because you fundamentally misunderstand the purpose of the mindkiller sequence, and the political policy/preference of LW. In fact, your post is a prime example of why there is such a strong bias against political articles and discussion.
I wish I had a way to convey the information you’d need to figure it out. Some people just get it, some people just don’t get it. I haven’t spent much time thinking about why, or how to convey it, but my initial guess is that it wouldn’t be easy.
Maybe get you into looking at what people are actually saying and taking some time to come up with replies that actually address that content in a meaningful way, instead of just responding with cached thoughts like “Republicans are crazy” or “shouting is rude”. And to realize that coming up with thoughtful replies isn’t just a question of figuring out the correct etiquette, but requires skill and insight which you might not always have.
Because if you read the recommendations, none of them is objectionable, though some may be mistaken if taken as moral injunctions rather than as guides to bear in mind. Your post, otoh, is mealy-mouthed misdirection combined with “Boo blues!”.
Don’t discuss politics, discuss policy, unless you’re aiming to overthrow the system: even if you devote your entire life to one singular policy goal and get elected to your national Parliament, your chances of achieving that goal are not great.
Part of the negative reaction to your post, I think, is that this came off as disingenuous. Everyone knows the party you think is crazy is the Republican Party. I understand the point you were trying to make is more meta than that, but it’s hard not to be wary of someone who wants to talk about politics when they lead in with the suggestion that a large fraction of Less Wrong is aligned with a crazy party.
There is a harm in talking about all these things at such an abstract level: it probably exaggerates the extent of actual disagreement. I don’t really have many hard-and-fast political views right now but if I take a political identification quiz I’ll usually end up listed as a libertarian (with slight movement to the left). But the content of my libertarianism is basically “society should do the things most economists think they should do”. There are a few other assumptions built into it but it has little to do with anything Ayn Rand talked about (and I’ve never voted for the party you think is crazy).
So I wonder if people might be more receptive to a post like “Hey, guys. I see a lot of you identify as X. It seems like part of X is believing Y. Y seems like it is obviously bad to me, so I’m wondering if those of you who identify as X could explain if they identify that way despite Y, or if they really believe Y. If you believe in Y maybe you could explain why it is not as crazy as it sounds to me.”
Hm, thanks for the feedback. You might be right that couching any criticism I have in a ton of fake humbleness might be necessary to make me seem less confrontational. I’d much rather be honest about what I actually feel, though.
(Rest of post retracted, agree with criticism)
Largely for the same reason that, when someone makes a comment about the downfall of social mores over the last fifty years, it’s going to get a comment about listening to conservative talk radio. There are certain ideas that, no matter how unique and individual your method of getting to them is, happen to have five or six decimal places’ worth of correlation with particular ideologies and with listening to particular sources.
Fake humbleness might be better than no humbleness. But I’d actually recommend a degree of genuine humbleness. If you’re not open to the possibility that the policies you support are the crazy ones and the people who you disagree with are right then I wouldn’t want you discussing politics on Less Wrong either. If you want to discuss politics just so you can correct the views of others, that sounds really terrible.
I think it strongly implies everyone on Less Wrong has a decent model of the average European’s politics and common political rhetoric in general.
Oh, but I am genuinely humble about things I am uncertain about. A lot of actual politics, such as economics, are sufficiently complex that I dare not have too strong an opinion about them. The same does not hold for evolution being real, or boys kissing being okay, or global warming being a thing, or a hundred of other things that people have somehow decided is political. I do not see why I have to pretend to be uncertain about subjects merely because someone said “it’s political now guys, everyone pretend you know nothing”. It frustrated me at school when I could not defend gay kids without being called gay myself. It frustrates me on Less Wrong that I cannot call certain views crazy without being called leftist.
If Less Wrong would admit that a lot of Republican-held views are simply crazy, and distribute criticism of craziness fairly regardless of political allegiance, so that it’s not just Republicans who get criticized… I would be more than okay with this.
If one called religion “crazy”, one would be likely to be an atheist. And if an American lists a bunch of Republican views and calls them all crazy, without doing anything similar about any Democrat-held view, I’d consider them likely to be a Democrat (or a libertarian, but since you seem to dislike libertarianism, that’s unlikely in your case).
On my part, it frustrates me that you see calling certain views “crazy” as supposedly being dissent or an argument. No! Calling a different view “crazy” without any argument about why, is a status game—it’s an attempt to shut down dissent by deliberately lowering the status of the people that even attempt to discuss the issue.
i.e. they aren’t just in really strong disagreement with you (something which would put them on an equal level), they are insane wackos, and nobody sane could possibly hold any doubt about the issue, or worse yet defend the views, or worse yet share them. It’s an attempt to throw said views outside the Overton window.
On my part I’m actually sympathetic about such status games. I’m a progressive. I wish that e.g. neonazism in Greece had been destroyed, and same with lots of other vile crazy views. I don’t want to discuss with Greek neonazis, I want them utterly destroyed and thrown out of any political discussion completely.
But you seek the same about your political opponents (seemingly Republicans), while also seemingly denying you so seek it.
People on LessWrong, however, have the ability to recognize status games when they see them.
Right, so as ArisKatsaris says: calling something crazy without an argument is a status game. I suspect if you actually dived into the issues themselves you wouldn’t be that far away from even the most reactionary people on this site. I don’t think anyone here doesn’t believe in evolution. I don’t think anyone has strong moral issues with homosexuality—though you might hear some descriptive analyses of the cause and nature of homosexuality you might not like. “global warming being a thing” is trickier: since there are several sub-claims within it. Plus, this is a place where people like to question scientists. I mean, there is no way there aren’t large minorities here who disagree with scientific consensuses in nutrition science, pharmacology and psychology. Climate science needn’t be special.
But the issue is that you’re not actually arguing these points, you’re just waving a flag. You can tell just by the way you phrased them: “global warming being a thing”, “boys kissing being okay”. It shuts down discussion about these issues because you’ve construed them such that anyone who wants to, say, question the widespread hyperbole when Democrats discuss global warming, or talk about how homosexuality can’t possibly be genetic, now has to take a status hit as a result of taking the side you’ve construed as “crazy”.
That is a possible explanation, indeed.
Another possible explanation could be that everyone on LessWrong thinks that saying “my enemies are crazy” without providing specific arguments why is how Democrats typically speak. (Or perhaps that a Republican would likely use some other word, such as “godless” or “commie”.) In which case, it’s a simple logical deduction that if the author speaks like a Democrat, his supposedly crazy enemies are most likely Republicans.
Yet another possible explanation could be that the majority of American LW readers are pro-Democrat, therefore the “crazy enemies” of a random person (in the context of speaking of the USA’s two major political parties, which excludes Libertarians etc.) are most likely Republicans.
I’m not endorsing any of these views here; just saying that all of these are plausible explanations why someone might guess you meant Republicans, and the other explanations are not evidence for Republicans being crazy.
A different example: If you meet a guy on the street and he starts talking to you about “inferior races”, are you able to guess whom he meant? Does your ability to guess correctly imply that you agree with him?
I will yield the point made by you and several others that yes, other interpretations are possible and in fact more likely.
I’m intrigued by your usage of “yield the point” in this context. Do you feel that the more likely interpretations proposed by others in this matter takes away something of value from you?
LessWrong isn’t terribly off-put by confrontation, it’s the idea that is voted on.
Again, mischaracterization of what I wrote.
My original post: http://lesswrong.com/lw/iqq/a_game_of_angels_and_devils/9tat
I suggest that’s one reason you’re downvoted—mischaracterizing what others say in a self serving way.
30% of LessWrong are liberals, 30% are socialists, and 30% are libertarian. 3% or so are conservative.
That’s the progressiveness of LessWrong showing: even if we stupidly use the sides in American politics (where libertarians are weirdly considered allied to the Republicans), that’d be 60% that would vote Democrat vs 33% that would vote Republican.
But I wouldn’t want to use the sides of American politics—the world is NOT the battleground for a fight between Republicans and Democrats and the stupid politicals alliances of America needn’t be our concern. Libertarianism is the ideology that says “stop throwing people in jail because they smoked marijuana”. I think that’s a very fine thing it says right there. Even finer than gay marriage (which I also support) btw.
And I’m saying this as someone who called himself a socialist in that poll. And who has voted for libertarians in the past also. If you’re seeing a right-wing bias in LessWrong, despite only 3% calling themselves conservatives, then you’re suffering from seeing everything through the prism of American parochialism where Only Two Sides Exist.
Libertarianism as defined in the LW survey question says more than that. I agree we should stop throwing people in jail because they smoked marijuana but I still answered “socialism”. (IOW it doesn’t generically refer to the bottom half of the Political Compass plane but to the bottom right quadrant specifically.)
As I already said above, I also answered “socialism”. My point was that “the stupid political alliances of America needn’t be our concern”. My own politics of interest are such that I consider libertarianism allied to progressivism on the issues that I’m most concerned about. Sophronius, however, seems so focused on the American political alliances that, because by historical accident rightwingers in America are currently allied to libertarians, he sees this as evidence of rightwing bias.
I saw those comments. They were of terrible quality and largely based on nothing but hearsay about Rand. They deserved to be downvoted regardless of your viewpoint.
Can you expand “deserved” here? I’ve never known karma to be about something I could classify as being “deserved” or not.
This is almost certainly the chain of conversation Sophronius is referring to.
While I’m unaware of any official rules related to upvoting or downvoting individual posts or comments, karma’s primary use at this time is to act as a gate on posting threads to Main. I did not read that thread at that time (and have not voted Sophronius’ content there one way or the other), but it’s pretty much the precise sort of stuff I would prefer not to see on Main.
As a Main post or as comments in the Main section?
The content that started the chain was a post, rather than comments to a post; I linked to the comment chain by another poster that quoted the particular relevant sections, for simplicity and clarity.
Honestly, the content of the text I don’t think I’d like to see even as a comment in the Main section—it’s basically a burst of preaching to the crowd, except the crowd here won’t even know most of the point—but Karma only controls the content of posts in the Main section and is thus most relevant in that context.
I don’t consider the comment section useful or relevant in any way. I can see voting on articles being useful, with articles scoring high enough being shifted into discussion automatically. You could even have a second tier of voting for when a post has enough votes to pass the threshold into Main for the votes it gets once there.
The main problem with karma sorting is that the people that actually control things are the ones that read through all content, indiscriminately. Either all of LessWrong does this, making karma pointless, or a sufficiently dedicated agent could effectively control LessWrong’s perception of how other rationalists feel.
I’m sorting by Controversial for this thread to see what LessWrong is actually split about.
In this case, the content was already in a post.
Mechanically, I’m not sure how you’d handle automatically upvoting articles into Discussion: people do that by hand often, but they have to do it by hand because most contents lose usefulness and sometimes even readability when pulled from context.
((At a deeper level, it’s quite easy to imagine or select posts that belong in Discussion or solely as comment and will quickly get high Karma values, and just as easy to think of posts that belong in Main but shouldn’t have anything that would make folk upvote them to start with.))
At least at this point, it’s easy enough (and often necessary enough) to change Sorting regularly just to find an article more than once, so I’m not sure sorting is the most meaningful part of Karma. The ability to prevent posters from regularly creating Main articles seems more relevant, and a number of folk at least treat Main articles more seriously.
That was me! And it’s you again! I should have known!
Although you’re mischaracterizing what I said. Again. Though I’m not surprised, as it was your modus operandi the last time we spoke: first of Rand, then of Rand’s sympathizers, then of me, then of the LW community as a whole.
For anyone who wants to claim that my characterization here is unfair, I invite them to read the original thread and get back to me if they still think so: http://lesswrong.com/lw/iqq/a_game_of_angels_and_devils/
You continue true to form here:
I’ll give you another hypothesis. You’re getting the response you’re getting because you’re screaming that you’re an internet crank at the top of your lungs. And I’m guessing that many of those downvotes are coming from the Progressive side of the field. Maybe most, as another guess is that the Progressives are more intent on driving cranks out of the LW community than the Libertarians are.
Do you really fail to see how your last comment was the usual ridiculous posturing of the internet crank, who can only see disagreement with him as a moral and intellectual failure of others, and then tells them so, like they’re going to believe it and be impressed by it?
And your initial post here was condemning the Progressives here for not condemning the Libertarians loudly and viciously enough.
Did you expect gratitude from that self supposed keen insight?
By the way—responding 25 times in a thread? Crank crank crank.
The only thing missing are the red and green flashing gifs.
Saying that Ayn Rand is crazy contains no useful political information that helps someone who reads your post update his map of the world in a productive way.
Saying Ayn Rand is crazy is no criticism of Ayn Rand. It might be defamation.
The post basically says that you should judge someone’s rationality by his willingness to believe in scientific authorities and signal that belief, instead of judging his rationality by direct empiricism or by whether he chooses effective strategies that help him win.
One of the core Less Wrong dogmas is that rationality is about winning. The post basically disagrees and doesn’t explain why.
How do you plan to tell who downvoted you, and why they did so? Doesn’t look like very sound experiment design to me.
I don’t need to, all I need to examine is whether I suddenly get a huge influx of downvotes after this. This happened before when I off-handedly mentioned Ayn Rand as an example of a crazy person. If it happens again, it’s weak evidence in favour. You’re right though, the lack of control and analysis makes this mostly just a joke hypothesis.
Edit: Oh there we go, I’m at −12 now. It didn’t seem to happen until after I mentioned Ayn Rand though, so maybe it’s exclusively mentioning her that somehow causes people to go off their rocker. Or possibly it’s entirely unrelated, but still.
Edit 2: And now I went from −12 to +20 within minutes. I guess the other team just arrived? This is actually pretty funny to watch. It’s a bit like a football match, only my ego is the ball.
Edit 3: And now I am at −5 again. Looks like some people just downvoted all my posts in this thread again. I wonder if there is any pattern behind these waves of up-and-down voting, or if it’s just statistical clustering.
I think of Rand as someone who took a few steps outside the consensus and found both true and false things there. She wasn’t simply crazy.
Your ego is entangled with your karma on LW?
What a poor choice.
Wait, what?
At the moment, this comment is at −3 with 33% positive… which implies it has gotten 3 upvotes and 6 downvotes; 9 votes total.
This is not strictly inconsistent with it going from −12 to +20 to −5… it’s possible that 6-12 of the initial downvoters retracted their votes while 20-26 new voters upvoted, and then 17-23 of the upvoters retracted their votes and 0-6 new voters downvoted, but this seems so implausible as to not be worth taking seriously.
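The vote arithmetic here can be made explicit. A minimal sketch in Python, assuming the site displays a net score (upvotes minus downvotes) and a percent-positive (upvotes over total votes); the `vote_counts` helper is hypothetical, for illustration only, not anything the site actually exposes:

```python
def vote_counts(score, percent_positive):
    """Recover (up, down) from a net score and the fraction of votes that are up.

    Solves: up - down = score  and  up / (up + down) = percent_positive.
    Undefined when percent_positive == 0.5 and score != 0.
    """
    p = percent_positive
    # score = up - down = p*total - (1 - p)*total = (2p - 1) * total
    total = score / (2 * p - 1)
    up = p * total
    return round(up), round(total - up)

# A comment at -3 with 33% positive implies 3 upvotes and 6 downvotes:
print(vote_counts(-3, 1 / 3))  # prints (3, 6)
```

That matches the 3-up/6-down figure above; the implausibility argument is then just that no likely sequence of votes and retractions connects those nine votes to swings of −12 and +20.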
I assume I’m just misunderstanding you. Are you perhaps comparing time-stamps of your overall karma and deciding that this comment is the cause, rather than anything (or everything) else you’re posting at roughly the same time?
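The vote arithmetic here can be checked mechanically. Below is a minimal sketch (a hypothetical helper, not any real LW API; the search bound is an assumption) that, given a net score and the displayed integer “% positive”, enumerates the (upvote, downvote) splits consistent with both:

```python
# Hypothetical helper for the vote arithmetic discussed above.
# Given a net score (upvotes - downvotes) and the displayed integer
# "% positive", enumerate all consistent (upvotes, downvotes) pairs.

def vote_splits(score, percent_positive, max_up=1000):
    matches = []
    for up in range(max_up + 1):
        down = up - score  # score = up - down
        total = up + down
        if down < 0 or total == 0:
            continue
        # The site shows a rounded integer percentage.
        if round(100 * up / total) == percent_positive:
            matches.append((up, down))
    return matches

# A comment at -3 with "33% positive" pins down the split uniquely:
print(vote_splits(-3, 33))  # [(3, 6)] -> 9 votes total
```

As the parent comment notes, −3 at 33% positive admits only one small solution: 3 up, 6 down, 9 votes total.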
He’s probably talking about either votes on his top-level post, or maybe his 30-day karma. Doesn’t matter much.
Please link to the comment so people can verify the context for themselves.
The comments above suggest this as the thread under discussion.
That link is not provided by the OP, though, so it’s possible they meant something else. OTOH, Googling site:http://lesswrong.com “Ayn rand derangement syndrome” only turns up that thread, so it seems the likely referent. (To my amazement, removing the site parameter still only turns up that thread, which seems implausible… is this some kind of automatic Google-tuning? Do others get the same result?)
For my own part, I think a charitable reading of the OP’s summary is close enough to accurate, but in the context of their comments more generally I’m no longer willing to extend them the benefit of the doubt implied by a charitable reading.
I’m getting the same results. It’s a relatively recent neologism (~10 years), and most uses focus on modern political leadership or modern organizations, which may be why.
Still, a surprising Googlebomb.
Right. It’s those damn greens. Damn those greens, with their votes for… crazy green things! Not like us blues, who want nothing but good and rational blueness!
[ETA] My mind has been killed. This is why I don’t want party politics—as opposed to policy—on LessWrong.
Couldn’t you instead exercise self-control?
Don’t worry, the last crazy post on politics I saw was voted down to −10.
The last survey, compared to the previous one, showed a big shift toward socialism and away from the libertarian/liberal founding crowd. But I’m pretty sure that isn’t what you mean. What is actually happening is that people are noticing that the initial demographics had views at odds with the average views of the New Atheist cluster (with which we are converging), and this bothers them.
How about very progressive and very libertarian people having a less wrong discussion about politics?
It would be nice—I think there’s significant overlap between libertarian and progressive ideas on drug legalization, immigration, and probably other issues, but each group has built up a huge ugh field around the other.
I suggest that the ugh field is largely one-way: Progressives for Libertarians.
One example. Years ago I bumped into a Progressive fellow from my freshman college dorm, whom I remembered having your typical freshman dorm political arguments with. On seeing and recognizing each other, I say hi and reach out my hand to shake his hand—and he refused. Ha! What an ideological ass! Yet it still cheered me to see him, though he apparently still gnawed the bones of his ideological resentments decades after we had last seen each other. Progressives think nothing of showing such personal resentment and animosity toward those they disagree with. Indeed, as our current poster demonstrates, they often consider such public displays of hostility a feature, not a bug.
Libertarians are too much an ideological minority to hate unbelievers with much fervor—we’d have to hate most everyone. We don’t have the luxury of living a life in an intellectual walled garden where we can get away with the same kind of venom, loathing, and intolerance. If we couldn’t divorce the memes from the meme carriers in our minds, we’d all be in prison for mass murder.
For example, I think the Bible is a moral abomination. But I often quite like serious religious believers, and end up gravitating toward them. Similarly, one gal I know started with plans for the seminary, then became a Marxist, and now is a Wiccan, Progressive, dirt-over-humans, tree-hugging Socialist. Likely she’s a utilitarian as well. She’s endorsed most every ideological horror common in western societies, but I like her just fine. In fact, I feel a bond to her because of her lunacy.
For me, the salient division isn’t between what people profess to believe is true, but whether they seem to care about what is true.
...
If you are genuinely interested in that dialog you shouldn’t use language like this (edited for unnecessary harshness). Come on, you know how progressives reading that first sentence will react. You basically describe them as heartless. You know that even the exceptions to your statement will take it personally. Why phrase it that way? You could just as easily write: “Since there are so many progressives, it is easy for them to isolate themselves from ideological opponents and avoid considering the possibility that libertarians aren’t evil. The fraction of libertarians open to that conversation is much higher.”
With the second sentence why “loathe”? Why load your mental model of progressive policies with negative emotional valence? That kind of stake-raising is exactly why conversations about politics are so hard. Surely “disagree” or “believe misguided” convey your position without telling progressives you think their ideas are loathsome. “Remake the world to efface their effects” is scary, stake-raising language too. I understand that it doesn’t literally mean anything other than “I would like to replace progressive policies with libertarian policies” but words have connotations and imagery.
Do I? First, I doubt that projecting their emotional reaction was foremost in my mind.
And no, this is not all Progressives, all the time. The context of my comments: Progressives have a greater ugh field for Libertarians than Libertarians have for Progressives. Which I think is true, and which I think shows up as them displaying more of the behavior in my generalization than Libertarians do.
No. Plenty of heart. But a human heart has more emotional range than Barney’s.
Because I have values. The negative emotional valence comes as a by product of those values when confronted with things that contradict those values.
HPMOR:
Yet another moment where I cheered Harry.
And in this case, the emotional valence was particularly relevant to my point: it’s not just that you don’t have to hate people with ideas you disagree with, but you don’t have to hate people whose ideas you hate. And, you don’t have to hate them if they hate your ideas either.
Your suggestion seems to be that the latter is too much to hope for.
Yes, and in this case they’re relevant to the point.
Seems like it should have been somewhere in your mind. I mean, I guess if you were just complaining to other Libertarians it’s fine, but it seems like the productive audience for your comment would be progressives.
Don’t disagree.
I have values too. They result in negative emotional valence for bad things happening to people. And inevitably they leak over a bit into the policies and people I think cause those bad things. But I do my best to hold the tides back and keep my values judgments out of my policy analysis. That way I can change my mind on policy if I hear new arguments or learn new information. I don’t think that’s the same as Yoda’s poor advice.
I have no idea if it is too much to hope for or not. How is it going so far? It would be great if political discourse lived up to your ideals—but why not make it easier for everyone?
Analysis disconnected from values sounds rather pointless to me. Particularly in politics. The first step in good faith negotiation is a communication of values. If I don’t clearly communicate my values, how is a Progressive supposed to come up with an argument to satisfy them?
I’m trying. The goal isn’t yet another pointless political discussion, or talk for talk’s sake.
If they don’t know my values, the discussion will be unproductive. If knowing my values means they can’t have a productive conversation with me, then we won’t be having a productive conversation. End of story. The only people I might have a productive conversation with are people who can talk to the enemy.
Further, having to self-censor the information flow does not make the conversation easier for me. In fact, it doesn’t make it easier on anyone. It’s a cost, an impediment, a friction in the exchange of information. This is where I disagree with Crocker. Why should I have to pay that cost, if I’m not requiring it of others? Two people playing by Crocker’s rules can get things done.
On a more personal note, I find people who require their tender feelings to be stroked and soothed 24⁄7 tiresome. If soothing their feelings requires me not being honest about mine, I find it even more tiresome. No doubt they find me tiresome too. Fine. I’m not an appropriate playmate for those people. And they aren’t for me. I can live with that.
Also, I find the culture of offensitivity highly manipulative. Hurt feelings become a trump card to stifle expression of opinion. It’s the new blasphemy. I’m not interested in playing that card, particularly in a political discussion on the web, and see no compelling reason to consent to having it played on me.
I’ve had this discussion before on LW. It’s admittedly a trade off, and one that varies by personality.
But in the case of radically opposed political views, demanding that one side refuse to fully communicate its position strikes me as a non-starter. Saying that I “disagree” really isn’t communicating. I find the proposed system grotesque, and its moral foundations an abomination. IMO, one of the problems with those on my side of the argument is that they don’t question the moral premises of Progressivism. Another problem is that Progressives do their best not to hear them. One of the immunizing strategies of the majority is refusing to talk to the Devil.
Can you unpack “grotesque” and “abomination”? When people use words like that I mostly understand them to be conveying disagreement, along with the desire to rile people up in unproductive ways, but I understand you here to be claiming to have different goals than that. I’m not sure what they are.
Disagree really isn’t right at all. I disagree that 2+2=5. Progressivism is a set of values and programs to implement those values that runs counter to my values. Strongly counter to my values. I’m not disagreeing, I’m disvaluing.
For my own part, I have no difficulty talking about people disagreeing over values, but I’m content to talk about people having values that run counter to each other’s values instead, if you prefer that.
So… when you call a system “grotesque” or a moral foundation an “abomination,” you’re conveying that your values run strongly counter to it? Did I understand that right?
Well, I’m not Spock tallying up a spreadsheet of values, so another part of what I’m communicating is my emotional reaction, and the intensity thereof. And indeed, that my reaction is a moral reaction, with some of the associated multi-ordinal punishing and disapproval characteristic of moral reactions. Though in this case, not punishing as much as a withdrawal of goodwill and a will to protect when they get screwed by the systems they advocate.
Grotesque and abomination also connote the twisted evil of the systems. One example. The poor who are supposedly so cared for are systematically punished if they take actions to improve their situation. Get a job, and face effective marginal tax rates, counting government benefits, often in excess of 100%. Find a partner to share the burdens of life, and likewise lose benefits.
Not just harmful, but a perverse and twisted harm, punishing someone for trying to do the right thing and improve their lot in life. When the “unintended consequences” of the system look similar to what a sadist would do who was trying to cripple people, I think “grotesque” and “abomination” applies.
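The benefits-withdrawal arithmetic behind that claim can be sketched with made-up numbers. All figures below are invented for illustration, not taken from any real tax code or benefit schedule:

```python
# Toy illustration of an effective marginal tax rate above 100%.
# Every number here is hypothetical, chosen only to show the mechanism.

def effective_marginal_rate(extra_earnings, tax_paid, benefits_lost):
    """Fraction of extra earnings lost to taxes plus withdrawn benefits."""
    return (tax_paid + benefits_lost) / extra_earnings

# Hypothetical: a $1,000 raise costs $200 in taxes and $900 in
# withdrawn benefits, so the worker ends up $100 poorer overall.
rate = effective_marginal_rate(1000, 200, 900)
print(f"{rate:.0%}")  # 110%
```

When the benefit phase-out plus tax on a marginal dollar exceeds the dollar itself, the rate passes 100% and earning more makes the person worse off, which is the perversity the comment describes.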
So “grotesque” and “abomination” are meant to convey that the other side is not only incorrect, but also to express your moral judgment of the other side’s position as twisted, evil, and perverse, and also to express your withdrawal of goodwill from the individuals who hold that position, and your reduced willingness to protect them from certain kinds of harm (specifically, from harmful consequences of that position).
Do I have it right now?
No, not right.
The same issue as “disagree”. 2+2=5 is incorrect. I’m not saying that their position is incorrect. Clippy isn’t “incorrect” either.
Both the loss of goodwill and willingness to protect are contextual on the same types of situation, while I read what you wrote as making the loss of goodwill general.
OK.
Analysis that is connected to values doesn’t have to mean embedding the values in the analysis. Pick a policy. Talk, in neutral terms, about what you think it will do. Then express how you feel about those impacts. Then the progressive you’re talking with can say “oh, I don’t care about that impact at all” or “I certainly care about that impact but disagree that the policy does it.” You can’t have conversations about terminal values. You can have them about policies which is why you have to take terminal values out of your conversations about policies.
Right, so clearly express those values. But don’t attach the values to progressive policies. If it is the policies themselves that you loathe, how is a progressive supposed to argue for them?
I think this is probably wrong for (most) humans. We’re immediately distracted by status signals and emotions. Once the conversation is about that, that is all it’s about.
Most people can’t play by Crocker’s rules. I’m not even sure the people who say they play by Crocker’s rules do all that well.
Fair enough.
To be clear: if you had said “I loathe libertarian policies” I would have made the same objection. Both sides ought to lower the stakes.
This is interesting, and I would be interested in hearing you expand on it. Part of why your language seems unnecessary to me is that I’m somewhere between a libertarian and a Progressive, and I don’t see differences in values so much as I see Progressives not understanding how incentives work.
Sure you can. You can explain yours to the other guy, and likely discover something about them yourself in the process.
I agree about the possibility of discussing the likely outcomes of a policy divorced from the valuation of the policy. But the valuation provides both the motivation for the discussion and the punchline to it.
He argues by showing me how I am mistaken or not fully aware of things entailed by the policy that I would value positively.
I suppose for some people. But since I think the valuations are an important part of the conversation, if those people can’t do valuations and objective analysis, they won’t be very fruitful partners in the discussion.
Nope. Both sides should be as clear as they can about what the stakes are. I think that’s what’s missing. Here are my values. Here’s why I loathe your policies. Once put on the table, I think there is some hope of setting aside for moments and doing the objective analysis. But until honestly confronted, I’d expect the “objective” discussion to be polluted by both attempts to insert them, and interpretations on the look out for them.
There aren’t, necessarily. But I think statistically, there are.
I don’t think they care to understand. It’s not rocket science. They are motivated by something other than achieving outcomes. Some people want to do something. Some people want to be something. I think they tend toward the latter.
That’s an interesting statement. Would you mind expanding on it?
Preferences, likes, dislikes. I prefer A to B. I value A more than B.
I wish I could agree with you, but I run into (Reaction-influenced?) libertarians who conflate liberals, progressives, and Stalin.
When you say conflate, what do you mean?
Normally, one conflates ideas, and not people. Do these libertarians see similarities between Progressivism and Stalinism, or do they have the same emotional reaction to Progressives that they have for Stalin?
As nearly as I can figure it, they think that the ideas held by progressives and liberals are so similar to state communism that anything faintly leftish is on the short route to genocide.
Or at least the Road to Serfdom.
People failing to see degrees of a problem is itself a problem. But notice that you’ve identified hatred of ideas, though unfairly exaggerated, and not hatred of people, or even attribution of malicious intent.
Even Hayek, who had some of that all-or-nothing attitude, didn’t attribute malicious intent to Socialists, to whom he dedicated The Road to Serfdom.
Me, I’ve started to question intent more closely, as I see the theocratic impulse to force others to live by your values as one of the worst possible intents.
What you describe sounds pretty common among people who are either capable of detaching emotionally from the topic at hand or who lack emotional investment in it in the first place, as a description of people who aren’t and don’t.
There’s a lot of different labels that can be applied to this condition depending on how one wants to frame it; rather than get into framing wars I’ll just label it X for convenience. And X is hardly limited to how libertarians view progressives, of course.
All of that said, you may be right that libertarians are more likely to demonstrate X than progressives are. You might further be right that this is because libertarians are too much of a minority, because they lack the luxurious walled-garden privileges that progressives enjoy, because they would otherwise commit mass murder, etc.
That said, I would caution against inferring any of that with significant confidence solely from personal experiences.
I would also caution against describing the situation the way you do here unless your intention is to upset progressives who lack X.
That said, upsetting people in this way can of course be a very effective way of maneuvering for status, as the people you upset will typically express their emotions, which in a social community like this one allows you to roll your eyes and dismiss them with community support. If that sort of social status maneuvering is your goal in the first place, then of course neither of those cautions apply.
Not as a goal in itself, but it appears frequently to be a necessary first step toward getting progressives to the point where it’s possible to have a reasonable discussion.
I’m sure my generalizations would be wrong for lots of individual Progressives and Libertarians, but that won’t make me dismiss my conclusions from my lifetime of observations. What can we reason, but from what we know?
I’m a libertarian, loathe Progressive doctrines, and would remake the world to efface their effects from existence. If they’re Progressives and lack X, I don’t see a way to sugar coat those facts that will make them happy. Do you?
Yes, and framing someone’s statements as intentionally upsetting people to maneuver for status is effective in maneuvering for status with some people too.
We can’t. But we can do things that increase the reliability of what we know.
Nope.
Absolutely.
People here generally put in their due diligence in constructing reasoned, well-sourced arguments, tend to admit when they were wrong about something, and, most importantly, force you to look at your own assumptions. That puts it leaps and bounds above any other political debate I’ve ever seen.
If a group of economists and other relevant experts were polled on various policy matters and compared to the answers reached by LessWrong users if policy was discussed on LessWrong, how much do you think the answers would differ, and would it be the experts’ errors or LessWrong’s errors causing whatever discrepancies?
This depends a lot on the proportions of the economists that are from each school and what constitutes “other experts.”
If you get a representative slice of economists, I’d expect much better predictive value on any given policy than LW because a) they have a quite a bit more expertise dealing with the models and b) we’ve got a lot of Austrian School / Prediction Markets people here to skew away from the consensus. If you weight them too heavily to any one school, especially the really funky ones, then you might see us start to pull closer but I still doubt the gap would close.
If you get a representative slice of Sociologists, PoliSci folks, X-studies professors and whatever other political talking heads you can find in academia and put them in a room together, it’ll disprove the notion of a just universe when the building isn’t hit immediately by an asteroid. And they’ll also be wrong much more than LW, although we still wouldn’t beat any of the handful of real scientists hiding in the back (Anthropologists / Social Psychologists mostly).
I’m not saying we’re some kind of amazing truth engine here, just that this is an abnormally reasonable environment with a lot of abnormally smart and well educated people.
I think in general, people here are much too willing to take contrarian viewpoints. That is an interesting hypothesis, though.
I get what you mean by Less Wrong’s willingness to take contrarian viewpoints. However, what you need to remember is that these viewpoints do not seem contrarian within the Less Wrong community. Taking cryonics seriously, for example, is considered normal here, even by those who don’t attach a high probability to it working. And the reason for that is that Yudkowsky has declared this view to be mainstream within Less Wrong. Similarly, Yudkowsky has declared politics to be mind-killing, so that subject is off-limits. My issue with this is that in deciding what is and isn’t “normal” or “up for debate”, 90% of the debate has already been decided. And any post that disagrees gets downvoted to −6 and shut down, because “we don’t talk about that around here.”
Given this, I think I can see why Less Wrong has a reputation for group-think.
There is lots of political debate amongst lesswrongers “just around the corner” in various personal blogs and websites. So I’m not convinced the fact that it is taboo here is the cause.
I am not convinced either. I think my explanation is plausible, but it’s certainly not the only plausible explanation. However, I certainly think it’s important to pause and reflect about this.
I also think it’s very worrisome that any posts criticizing Less Wrong get downvoted (unless you’re Holden and you spend about 10 pages praising Less Wrong first and couch all criticism in disclaimers).
How confident are you that this is actually true?
Literally speaking not at all, since it was an exaggeration. 10 pages of praise is clearly not necessary.
That said, I strongly believe that posts containing criticism of Less Wrong on average get many more downvotes (and fewer upvotes) than posts which remark on how great Less Wrong is. For example, I have seen “joke” posts on how Yudkowsky is god get about +50 points (this was a while ago; I would need to check to confirm my memory). On the other hand, every time I post a criticism of Less Wrong, it gets a lot of downvotes (though usually some upvotes as well), and as for criticism posted by other people… well, I don’t see a lot of that, do you?
Maybe your criticisms of Less Wrong just aren’t all that well-reasoned. Plenty of Less Wrong criticism gets upvoted here. The most-upvoted post of all time is a criticism of MIRI, and several of my own most-upvoted comments are direct criticisms of Eliezer, e.g. this and this. See also this much-upvoted post.
Thanks for the reply. When you suggest that maybe the problem is on my end, are you really just offering that as a mere possibility, or do you believe that that is actually the case? I’m asking because while it is of course entirely reasonable to say that the fault lies with me, nobody as of yet has told me what specifically is wrong with my posts (other than: “not enough facts”, or: “You sound left-wing”). If the latter is the case, please tell me what specifically I could improve.
The first post you link to is the one by Holden that I specifically referred to above as the only type of criticism that does get upvoted. The reasons for this are varied:
1) Holden is high status: Nobody is going to tell Holden to shut up and go away (as I’ve been told to), because the mere fact that he is taking MIRI seriously is good for MIRI and for Less Wrong.
2) Holden is exceedingly polite and says nothing that could even be taken as an excuse to be offended.
3) Holden goes out of his way to praise Less Wrong as a community, which of course makes people here feel good.
4) Holden has spent a ridiculous amount of time and effort writing and supporting that exceedingly lengthy post, well beyond normal standards.
5) Holden doesn’t actually say anything that is considered Taboo here on Less Wrong. His post defends the proposition that donating to MIRI isn’t the best possible expenditure of money. That’s hardly going to rile people up.
Holden’s post is the equivalent of James Randi going to a dowsers’ forum and writing a 10-page thesis on why he thinks dowsing isn’t 100% effective, while repeatedly saying how he might be wrong, and how he really wants to be able to change his mind, and isn’t the idea of dowsing wonderful, and aren’t dowsers great people. Of course the dowsers would be very happy with a post like that: it only validates them to have someone like James Randi say all that. This does NOT mean that dowsers are all rational individuals who are happy to receive criticism of their ideas.
The same point holds for your own posts criticizing Eliezer, albeit to a lesser extent. And again, criticizing Eliezer is not taboo here. Criticizing Less Wrong itself, more so.
I agree that Yudkowsky hero worship is extremely creepy and should stop.
Fair enough. What’s the most recent example of Yudkowsky hero worship you’ve observed here?
(nods) That’s a far more defensible statement. It might even be true.
I’m not sure what you mean by “a lot”. I’ve seen more criticism of Less Wrong here than I’ve seen criticism of RationalWiki, for example, and less than I’ve seen criticism of the Catholic Church. More than I’ve seen criticism of Dan Dennett. I’m not sure if I’ve seen more criticism of Less Wrong than of Richard Dawkins, or less. What’s your standard?
We could instead ask: should there be more of it? Should there be less? I suspect that’s a wrong question as well though. Mostly, I think the criticism should be of higher quality. Most of what I see is tedious and redundant. Of course, Sturgeon’s Law applies in this as in everything.
All of that said, if I were to list off the top of my head the top ten critics of Less Wrong who post on LW, your name would not even come up; so if you are attempting to suggest that you are somehow the singular contrarian voice on this site, I can only conclude that you haven’t read much of the site’s archives.
There is also more criticism of Less Wrong here than there is criticism of people who think that the world is run by lizard-people. This is because Less Wrong is more relevant to Less Wrong than Lizard-people, not because the lizard-believers are actually considered more credible.
The only reasonable standard, to me, is comparing the amount of criticism with the amount of praise. I see many more posts talking about how great Less Wrong is than I see criticism of Less Wrong. More worryingly, the criticism of Less Wrong that I do see is on other forums, where it is widely agreed that Less Wrong is subject to group think, and it is summarily ignored here.
I assume you aren’t actually suggesting that RationalWiki, the Catholic Church, Dan Dennett and Richard Dawkins are as irrelevant to Less Wrong as lizard-people. I picked a few targets that seemed vaguely relevant; if you think I should pick different targets, let me know what they are.
Why is that? This doesn’t seem true to me at all.
Why does this worry you?
This might be true. Can you unpack what you mean by “group think”? (Or what you think those other people on other forums whom you’re reporting the statements of mean by it, if that’s more relevant?)
No, I am saying that comparing criticism of Less Wrong with criticism of other websites/people is not a valid metric at all, since the total amount written on the subject differs between each. You can’t look at absolute amounts of criticism here, it has to be relative or merely the total amount of posts would determine the answer.
It worries me that a lot of the criticism of Less Wrong is made outside of Less Wrong because this indicates that the criticism is not accepted here and Less Wrong exists in a bubble.
The exact criticism of Less Wrong usually isn’t very good, since people tend not to spend a lot of time writing thoughtful criticisms of websites that they aren’t affiliated with. It usually amounts to “gives off a bad vibe”, “uses their own little language”, “copies Yudkowsky in everything they believe” or “disproportionately holds extreme views without thinking this is odd.” All of this indicates what I call group think, which is the act of paying too much attention to what others in the in-group believe while being isolated from the rest of the world.
All right. Thanks for clarifying.
You realize this is still true if one replaces “Less Wrong” with any other community.
Which would mean there is no genuinely rationalist (inviting updates) community anywhere.
How specifically would it mean that?
Imagine that you have a community X, which is perfectly rational and perfectly updating. (I am not saying LW is that community; this is just an example.) Of course there would be many people who disagree with X; some of them would be horribly offended by the views of X. Those people would criticize X a lot. So even with a perfectly updating super rationalist community, the worst criticism would come from outside.
Also, most criticism would come from outside simply because there are more non-members than members, and if the group is not secret and is somehow interesting, many non-members will express their opinions about the group.
Therefore, “a lot of the criticism of Less Wrong is made outside of Less Wrong” is not evidence against the rationality of Less Wrong, because we would expect the same result both in universes where LW is rational and in universes where LW is irrational.
You write “so”, but that doesn’t follow. You are tacitly assuming that a community has to be held together by shared beliefs, but that does not match genuine rationality, since one cannot predetermine where rational enquiry will lead—to attempt to do so is to introduce confirmation bias. You also seem to think that the “worst” criticism is some kind of vitriolic invective. But what is of concern to genuine rationalists is the best—best argued, most effective—criticism.
If the group is discussing specialised topics, then good criticism can only come from those who are familiar with those topics.
You are still missing the point that a genuine rationalist community would invite criticism.
How specifically?
For example, should we ask all the critics from outside to publish an article on LW about what they think is wrong with LW? Do we also need to upvote such articles, regardless of their merit? Do we also have to write supporting comments on such articles, regardless of whether we agree with their points? Do we have to obsess about the same points again and again and again, never stopping? … What exactly should a community do to pass the “invites criticism” test?
Why not? Your other comments are strawmen. But inviting opposing views regularly happens in, e.g., academic philosophy.
Thank you for the specific suggestion!
I made the strawman suggestions because I wasn’t sure what your point was, and I wanted to also have an “upper bound” on what the community is supposed to do to pass the “invites criticism” test. Because defining only the lower bound could easily lead to later responses of the type: “Sure, you did X, Y and Z, but you are still not inviting criticism.”
The simplest solution would be to contact people already criticizing LW and invite them to write and publish a single article (without having to create an account, collect karma, learn markdown formatting, and all other trivial inconveniences), assuming the article passes at least some basic filter (no obvious insanity; claims of LW doing something backed up by hyperlinks). There is always a possibility that we would simply not notice some critics, but it can be partially solved by asking “have you noticed any new critic?” in Open Thread.
Somehow I don’t like the “behave like a dick and be rewarded by greater publicity” aspect this would inevitably have, since the most vocal critics of LW are the two or three people from RationalWiki whose typical manner of discussion is, uhm, less than polite. But if we don’t choose them, it could seem from outside like avoiding the strongest arguments. Let’s suppose this is a price we are willing to pay in the name of properly checking our beliefs—especially if it only happens once in a long time.
Seems like a good idea to me; at least worth trying once.
I guess the invited opponents in this situation are other academic philosophers, not e.g. a random blogger who built their fame by saying “philosophers are a bunch of idiots” and inserting ad hominems about specific people.
So if we tried in a similar manner to speak with the polite equals, the invited critics would be people from other organizations (like Holden Karnofsky from GiveWell). Which kinda already happened. And it seems like not enough; partially because of the polite argumentation, but also because it only happened once.
Perhaps what we should aim for is something between Holden Karnofsky and our beloved stalkers at RationalWiki. Perhaps we should not ask people to express their opinion about whole LW (unless they volunteer to), but only about some specific aspect. That way they wouldn’t have to read everything to form an opinion (e.g. someone could review only the quantum physics part, ignoring the rest of the sequences).
Do you have a specific suggestion of people who could be invited to write their criticism of LW here?
Your article would have been much better received if you hadn’t mentioned LessWrong so much in your last main paragraph. If you had subtly avoided calling direct fault on LessWrong, I think this could have been very well received. Just look at the comments here. Despite the karma on the article, this post is getting a lot of attention.
I’ve been probing LessWrong’s reactions to various things since Inferential Silence motivated me to bother with LessWrong. I can give you a discrete bullet point list of what LessWrong likes, loves, and hates. It’s hard to pinpoint the groupthink because the one special topic that LessWrong “never” disagrees on is so hard to find. You’re perfectly allowed to disagree with cryonics, SAI, Yudkowsky, you can discuss politics if you’re quiet about it, you can discuss any of a number of things and not suffer intense downvoting so long as you express your thoughts perfectly clearly. In this way LessWrong skillfully avoids noticing that it is participating in groupthink.
So what is it? It’s simple, recursive, ironic, and intensely obvious in hindsight. LessWrong is the focus of LessWrong. It’s not any given subject, topic, person, or method. It is the LessWrong collective itself. LessWrong is the one thing you cannot hate, while also being a part of LessWrong. To challenge LessWrong is to challenge rationality. Challenging Yudkowsky? Sure. Not like he’s the avatar of rationality or anything. Go ahead, disagree with him. Most people here disagree with him on some subject or another. I’m probably one of the few people that does understand and agree with Yudkowsky nearly entirely. The best advice I could give LessWrong is that, if it were down to the two of them as to which was a better fit for being the avatar of rationality, it is Yudkowsky. LessWrong disagrees. LessWrong is totally content to disavow credence to Yudkowsky. No, in LessWrong’s view, the title of avatar of rationality belongs to itself. Not to any particular person in the collective, but to the collective itself. So long as you avoid hitting that node and make your thoughts clear in LessWrong’s memetic language, you’re fine. Fall outside that boundary? Hell no. Not on LessWrong. Not while I’m still here. (For each member of the hive mind in turn.)
There is a counter-argument here, in your and others’ increasing disallegiance to LessWrong. The problem is that most of you aren’t equipped to skillfully reform LessWrong, so you just end up leaving and the problem goes ignored. This effectively “removes” you from the hive, so despite the fact that you hold the counter-stance, you’re not really part of LessWrong to the point where it can be claimed that LessWrong values anything other than LessWrong. Well, suppose you don’t leave LessWrong. Despite your contrary view, you can barely identify the problem enough to voice it, making you question whether your viewpoint is rationally legitimate. Right now you’re on the border of LessWrong, deciding if you’re going to be a part of the collective or not. In this way, you can construct a very precise measure of LessWrong with a small bit of introspection.
Judging by the reception of your comments here, I’d say you’re well equipped to speak the LessWrong language, so all you need is sufficient understanding of the hive’s mind to begin reforming it. I’d further suggest starting with something other than the ban on politics, but if this was the subject you picked, then I must assume you’re not hive-aware (compare to: self-aware) enough to formally recognize the other flaws.
I am not sure I follow your argument completely. It feels to me as if you suggested that discussing everything, as long as it is polite and rational, is the proof of LessWrong hivemind.
Well, I would call that “culture”, and I am happy to have it here. I am not sure what benefit exactly would we get by dismantling it. (A well-kept garden that committed suicide because it loved contrarianism too much?) I mean, it’s not like none of us ever goes beyond the walls of LessWrong.
But then you say it’s okay to criticize anything, as long as one doesn’t criticize LessWrong itself. Well, this is from article “Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality”, having 92 karma at this moment:
So, what kind of observation specifically does your hypothesis disallow?
Trying (admittedly only for a very short time) to steelman your position, I’d say the “dogma” of LessWrong is that having an aspiring rationalist community is a good thing. Because LW is an aspiring rationalist community, so obviously people who think such community is stupid, filter themselves out of LW. In other words, the shared opinion of LW members is that LW should exist.
Everything being polite and rational is informational; the point is to demonstrate that those qualities are not evidence of the hive mind quality. Something else is, which I clearly identify. Incidentally, though I didn’t realize it at the time, I wasn’t actually advocating dismantling it, or that it was a bad thing to have at all.
That’s the perception that LessWrong would benefit from correcting; it is as if LessWrongers never go outside the walls of LessWrong. Obviously you physically do, but there are strict procedures and social processes in place that prevent planting outside seeds in the fertile soil within the walls. When you come inside the walls, you quarantine yourself to only those ideas which LessWrong already accepts as being discussable. The article you link is three years old; what has happened in that time? If it was so well received, where are the results? There is learning happening that is advancing human rationality far more qualitatively than LessWrong will publicly acknowledge. It’s in a stalemate with itself for accomplishing its own mission statement; a deadlock of ideas enforced by a self-reinforcing social dynamic against ideas that are too far outside the very narrow norm.
Insofar as LessWrong is a hive mind, that same mind is effectively afraid of thinking and doing everything it can to not do so.
I wouldn’t mind seeing some of the ideas you think are worthwhile but would be rejected by the LW memetic immune system.
That’s odd and catches me completely off guard. I wouldn’t expect someone who seems to be deeply inside the hive to both cognize my stance as well as you have and be judging that my heretofore unstated arguments might be worth hearing. Your submission history reflects what I assume; that you are on the outer edges of the hive despite an apparently deep investment.
With the forewarning that my ideas may well be hard to rid yourself of, and that you might lack the communication skills to adequately convey the ideas to your peers, are you willing to accept the consequences of being rejected by the immune system? You’re risking becoming a “carrier” of the ideas here.
Why don’t you just post them explicitly? As long they don’t involve modeling a vengeful far-future AI everyone will be fine. Plus, then you can actually test to see if they will be rejected.
Why are you convinced I haven’t posted them explicitly? Or otherwise tested the reactions of LessWrongers to my ideas? Are you under the impression that they were going to be recognized as worth thinking about and that they would be brought to your personal attention?
Let’s say I actually possess ideas with future light cones on the order of strong AI. Do you earnestly expect me to honestly send that signal and bring a ton of attention to myself? In a world of fools who want nothing more than to believe in divinity? (Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including, “Works in mysterious ways that we can’t hope to fathom.”)
I have every reason not to share my thoughts and every reason to play coy and try to get LessWrong thinking for itself. I’m getting pretty damn close to jumping ship and watching the aftermath here as it is.
I’m just trying to encourage you to make your contributions moderately interesting. I don’t really care how special you think you are.
Wow, what an interesting perspective. Never heard that before.
See, that’s the kind of stance I can appreciate. Straight to the point without any wasted energy. That’s not the majority response LessWrong gives, though. If people really wanted me to post about this as the upvotes on the posts urging me to post about this would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?
...Or is the average voter simply not cognizant enough to realize this...?
Worst effect of having sub-zero karma? Having to wait ten minutes between comments.
Not sure if sarcasm or...
Sarcasm.
We get the “oh this is just like theism!” position articulated here every ten months or so.
Those of us who have been here a while are kind of bored with it.
(Yes, yes, yes, no doubt that simply demonstrates our inadequate levels of self-awareness and metacognition.)
What, and you just ignore it?
No, I suppose you’ll need a fuller description to see why the similarity is relevant.
LessWrong is sci-fi. Check what’s popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
These concepts straight out of sci-fi have next to zero basis. Who is to say there even are concepts that the human mind simply can’t grasp? I can’t visualize in n-dimensional space, but I can certainly understand the concept. Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be? Are generation ships feasible? Is there some way to warp space to go fast enough that you don’t need an entire ecosystem on board? If complex information processing nanites aren’t feasible, is reanimation? These concepts aren’t new, they’ve been around for ages. It’s Magic 2.0.
If it’s not about evidence, what is it about? I’m not denying any of these possibilities, but aside from being fun ideas, we are nowhere near close to proving them legitimate. It’s not something people are believing in because “it only makes sense.” It’s fantasy at its base, and if it turns out to be halfway possible, great. What if it doesn’t? Is there going to be some point in the future where LessWrong lets go of these childish ideas of simulated worlds and supertechnological abilities? 100 years from now, if we don’t have AI and utility fog, is LessWrong going to give up these ideas? No. Because that just means that we’re closer to finally realizing the technology! Grow up already. This stuff isn’t reasonable, it’s just plausible, and our predictions are nothing more than mere predictions. LessWrong believes this stuff because LessWrong wants to believe in this stuff. At this moment in time, it is pure fiction.
If it’s not rationa—No, you’ve stopped following along by now. It’s not enough to point out that the ideas are pure fiction that humanity has dreamed about for ages. I can’t make an argument within the context that it’s irrational because you’ve heard it all before. What, do you just ignore it? Do you have an actual counter-point? Do you just shrug it off because “it’s obvious” and you don’t like the implications?
Seriously. Grow up. If there’s a reason for me to think LessWrong isn’t filled with children who like to believe in Magic 2.0, I’m certainly not seeing it.
It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
The human mind is finite, and there are infinitely many possible concepts. If you’re interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate .
Drexler wrote a PhD thesis which probably answers this. For discussion on LessWrong, see Is Molecular Nanotechnology “Scientific”? and How probable is Molecular Nanotech?.
Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we’d need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I’d love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I’m not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don’t have the technology already. The answer is that actually expressing our ideas onto physical reality is non-trivial, and by direct consequence, potentially non-viable.
I need backing on both of these points. As far as I know, there isn’t enough verified neuroscience to determine if our brains are conceptually limited in any way. Primarily because we don’t actually know how abstract mental concepts map onto physical neurons. Even ignoring that (contrary to memetic citation) the brain does grow new neural cells and repair itself in adults, even if the number of neurons is finite, the number of and potential for connections between them is astronomical. We simply don’t know the maximum conceptual complexity of the human brain.
As far as there being infinitely many concepts, “flying car” isn’t terribly more complicated than “car” and “flying.” Even if something in the far future is given a name other than “car,” we can still grasp the concept of “transportation device,” paired with any number of accessory concepts like, “cup holder,” “flies,” “transforms,” “teleports,” and so on. Maybe it’s closer to a “suit” than anything we would currently call a “car;” some sort of “jetpack” or other. I’d need an expansion on “concept” before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I’m aware of indicates that things like conceptual language are incomputable or give rise to paradoxes or some other such problem that would make “infinite” simply be inapplicable/nonsensical.
(nods) IOW, it merely demonstrates our inadequate levels of self-awareness and meta-cognition.
This doesn’t actually counter my argument, for two main reasons:
That wasn’t my argument.
That doesn’t counter anything.
Please don’t bother replying to me unless you’re going to actually explain something. Anything else is disuseful and you know it. I want to know how you justify to yourself that LessWrong is anything but childish. If you’re not willing to explain that, I’m not interested.
I don’t.
I often have conversations here that interest me, which is all the justification I need for continuing to have conversations here. If I stopped finding them interesting, I would stop spending time here.
Perhaps those conversations are childish; if so, it follows that I am interested in childish conversations. Perhaps it follows that I myself am childish. That doesn’t seem true to me, but presumably if it is my opinions on the matter aren’t worth much.
All of that would certainly be a low-status admission, but denying it or pretending otherwise wouldn’t change the fact if it’s true. It seems more productive to pursue what interests me without worrying too much about how childish it is or isn’t, let alone worrying about demonstrating to others that I or LW meet some maturity threshold.
Few places online appreciate drama-queening, you know.
Hypothesis: the above was deliberate downvote-bait.
I’m willing to take the risk. PM or public comment as you prefer.
I would prefer public comment, or to be exposed to the information as well.
How specifically can you be surprised to hear “be specific” on LessWrong? (Because that’s more or less what Nancy said.) If nothing else, this suggests that your model of LessWrong is seriously wrong.
Giving specific examples of “LessWrong is unable to discuss X, Y, Z” is so much preferable to saying “you know… LessWrong is a hivemind… there are things you can’t think about...” without giving any specific examples.
How specifically? Easy. Because LessWrong is highly dismissive, and because I’ve been heavily signalling that I don’t have any actual arguments or criticisms. I do, obviously, but I’ve been signalling that that’s just a bluff on my part, up to and including this sentence. Nobody’s supposed to read this and think, “You know, he might actually have something that he’s not sharing.” Frankly, I’m surprised that with all the attention this article got, I haven’t been downvoted a hell of a lot more. I’m not sure where I messed up that LessWrong isn’t hammering me and is actually bothering to ask for specifics, but you’re right; it doesn’t fit the pattern I’ve seen prior to this thread.
I’m not yet sure where the limits of LessWrong’s patience lie, but I’ve come too far to stop trying to figure that out now.
I do not represent Less Wrong, but you have crossed a limit with me. The magic moment came when I realized that BaconServ means spambot. Spammers are the people I most love to hate. I respond to their provocations with a genuine desire to find them and torture them to death. If you were any more obnoxious, I wouldn’t even be telling you this, I would just be trying to find out who you are.
So wake the fuck up. We are all real people with lives, stop wasting our time. Try to keep the words “I”, “Less Wrong”, and “signalling” out of your next two hundred comments.
ETA This angry comment was written while under pressure and without a study of BaconServ’s full posting history, and should not be interpreted as a lucid assessment.
Intriguing and plausible. Does this forum really have a hive mind with a self-preservation instinct? Since the comments you linked to are downvoted below the default visibility level, do you mind writing a Discussion post (or maybe a Main post, if you are adventurous enough) on the subject? These remain visible unless deleted. I wish I could incentivise you with a few hundred karma points, but alas, there is no karma transfer/donation mechanism on the site.
My guess is that most “criticism of LessWrong itself” is not well-received because “LessWrong itself” is not very specific, and so criticism of this vague idea typically isn’t able to take the form of clearly expressed thoughts.
The thoughts are crystal clear in my mind and can be expressed—and communicated—perfectly accurately with existing language. The problem lies in the inability of LessWrong to accept that there are concepts that it simply does not have concise language to express. It’s not that they can’t be communicated or are unclear in any way; it’s that LessWrong’s collective membership is not having the thoughts. They are dismissed as vague because they are not recognized as blatantly and obviously true. We’d need new vocabulary to concretize the concepts to the point of making the argument effectively believable from the raw communication.
LessWrong lacks the capacity to think beyond what’s written in a useful way.
Tell me you’ve actually thought about this for a full five minutes before you’ve bothered responding or voting. (Note: Five minutes will not be enough time to overcome your cognitive biases, even with the implied challenge that it will not be enough time to think clearly. (You do not have the ability to detect if your thought processes are clear or muddled with bias. “I don’t feel like I’m missing something,” isn’t a valid counter argument.))
My question isn’t “is this happening?”—my question is, “how big is the effect, and does it matter?” I suspect that’s the case for a lot of LW readers.
This is a recurring theme that I find over and over. These sorts of biases and problems are obvious; they are the kind of thing that is pretty much guaranteed to exist, the kind of thing you could hardly hope to escape from. But that does not in any way mean that the effects are large enough to be relevant, or that the time spent fixing them cannot be better spent elsewhere. It is not enough to say that it is worthwhile; you must show that it is worthwhile enough to compete with other options.
This implies that your article, should you decide to write it, would in fact be understood, and that a good proportion of the LW readership has in fact considered your platform. For your article to be effective, it may be necessary for you to lay out the extent to which these issues are an actual problem, instead of simply pointing out the obvious.
Let me put it this way: The effect is big enough that I have no qualms calling it a blanket inability. This should be implied by the rules of common speech, but people who consider themselves intelligent find it easier to believe that such confidence is evidence of irrationality.
What’s interesting is that you think such an article can actually be written. (Let’s ignore that I earned sub-zero karma with my posts in this thread today.)
Consider the premise:
(Obviously there are a few stray thoughts that you’ll find in the comments, but they are non-useful and do not generally proliferate into more descriptive articles.)
Let’s make it clear the purpose of such an article would be to get LessWrong to think about what’s written beyond what is written. This is necessary to make LessWrong useful beyond any other internet forum. Such an article would be advocating independent and bold thinking, and then voicing any compelling realizations back to LessWrong to spark further thought by others. A few short passes of this process and you could see some pretty impressive brainstorming—all while maintaining LessWrong’s standards of rationality. Recall that thought being a very cheap and very effective resource is what makes machine intelligence so formidable. If the potential for communal superintelligence isn’t sufficient payoff here, nothing will be.
Keep in mind that this is only possible insofar as a significant portion of LessWrong is willing to think beyond what is written.
If we suppose that this is actually possible; that superintelligent-quality payoffs are possible here with only slight optimization of LessWrong, then why isn’t LessWrong already trying to do this? Why weren’t they trying years ago? Why weren’t they trying when That Alien Message was published? You might want to say that the supposing is what’s causing the apparent question; that if LessWrong could really trivially evolve into such a mechanism, that it most definitely would be, and that the reason we don’t see it doing this is because many consider this to be irrational and not worth trying for.
Okay.
Then what is the point of thinking beyond what’s written?
If there aren’t significant payoffs to self-realizations that increase rationality substantially, then what is the point? Why be LessWrong? Why bother coming here? Why bother putting in all this effort if you’re only going to end up performing marginally better? I can already hear half the readers thinking, “But marginally better performance can have significant payoffs!” Great, then that supports my argument that LessWrong could benefit tremendously from very minor optimization towards thought sharing. But that’s not what I was saying. I was saying, after all the payoffs are calculated, if they aren’t going to have been any more than marginally better even with intense increases in rationality, then what is the point? Are we just here to advertise the uFAI pseudo-hypothesis? (Not being willing to conduct the experiment makes it an unscientific hypothesis, regardless of however reasonable it is to not conduct the experiment.) If so, we could do a lot better by leaving people irrational as they are and spreading classic FUD on the matter. Write a few compelling stories that freak everyone out—even intelligent people.
That’s not what LessWrong is. Even if that was what Yudkowsky wanted out of it in the end, that’s not what LessWrong is. If that were all LessWrong was, there wouldn’t be nearly as many users as there are. I recall numerous times Yudkowsky himself stated that in order to make LessWrong grow, he would need to provide something legitimate beyond his own ulterior motives. By Yudkowsky’s own assertion, LessWrong is more than FAI propaganda.
LessWrong is what it states on the front page. I am not here writing this for my own hubris. (The comments I write under that premise sound vastly different.) I am writing this for one sole single purpose. If I can demonstrate to you that such an article and criticism cannot currently be written, that there is no sequence of words that will provoke a thinking beyond what’s written response in a significant portion of LessWrongers, then you will have to acknowledge that there is a significant resource here that remains significantly underutilized. If I can’t make that argument, I have to keep trying with others, waiting for someone to recognize that there is no immediate path to a LessWrong awakening.
I’ve left holes in my argument. Mostly because I’m tired and want to go to bed, but there’s nothing stopping me from simply not sending this and waiting until tomorrow. Sleepiness is not an excuse or a reason here. If I were more awake, I’d try writing a more optimum argument instead of stream-of-consciousness. But I don’t need to. I’m not just writing this to convince you of an argument. I’m writing this as a test, to see if you can accept (purely on principle) that thought is inherently useful. I’m attempting to convince you not of my argument, but to use your own ability to reason to derive your own stance. I’m not asking you to agree and I’d prefer if you didn’t. What I want is your thoughts on the matter. I don’t want knee-jerk sophomoric rejections to obvious holes that have nothing to do with my core argument. I don’t want to be told I haven’t thought about this enough. I don’t want to be told I need to demonstrate an actual method. I don’t want you to repeat what all other LessWrongers have told me after they summarily failed to grasp the core of my argument. The holes I leave open are intentional. They are tripholes for sophomores. They are meant to weed out impatient fools, even if it means getting downvoted. It means wasting less of my time on people who are skilled at pretending they’re actually listening to my argument.
LessWrong, in its current state, is beneath me. It performs marginally better than your average internet forum. There are non-average forums that perform significantly better than LessWrong in terms of not only advancing rationality, but just about everything. There is nothing that makes LessWrong special aside from the front-page potential to form a community whose operations represent a superintelligent process.
I’ve been slowly giving out slightly more detailed explanations of this here and there for the past month or so. I’ve left fewer holes here than anywhere else I’ve made similar arguments. I have put the idea so damn close to the finish line for you that for you to not spend two minutes reflecting on your own, past what’s written here, indicates to me exactly how strong the cognitive biases are that prevent LessWrong from recursive self-improvement.
Even in the individuals who signal being the most open minded and willing to hear my argument.
The forum promotes a tribal identity. All the usual consequences apply.
If you felt really motivated, you could systematically upvote all my posts, but I’d prefer that you didn’t; it would interfere with my current method of collecting information on the LessWrong collective itself.
I’d write such a post, but my intention isn’t really aimed to successfully write such a post.
I’m more or less doing my best to stand just outside the collective so I can study it without having to divide myself out of the input data.
What do you mean by a measure here?
A class of measurements that are decidedly quantifiable despite lacking formal definition/recognition.
What content does “LessWrong” have here? If anything other than the “LessWrong” monad can be criticized… that sounds totally great. I mean, even if you mean something like “this group’s general approach to the world”, that still sounds like a much better place than Less Wrong actually is.
Really, if it weren’t for the provocative tone, I would have no idea you weren’t paying a compliment.
Hah, interesting. I didn’t notice I was making an interpretation duality here.
I suppose that only further clarifies that I haven’t actually listed a criticism. Indeed, I think it is a potentially good thing that LessWrong has hive mind qualities to it. There are ways this can be used, but it requires a slight bit more self-awareness, both on the individual and collective levels. Truth be told, the main problem I have with LessWrong is that, at my level of understanding, I can easily manipulate it in nearly any way I wish. My personal criticism is that it’s not good enough to match me, but that implies I think I’m better than everyone here combined. This is something I do think, of course, so asking for evidence of the extraordinary claim is fair. Trouble is, I’m not quite “super”-intelligent enough to know exactly how to provide that evidence without first interacting with LessWrong.
My personal complaint is really that LessWrong hasn’t activated yet.
Thank you for the question though; it tells me you know exactly which questions to ask yourself.
Interesting. I’d have expected at least a few intrinsic contrarians, having seen them nearly everywhere else, and group identification usually isn’t one of the things they’ll avoid slamming.
Absolutely no disagreement?
The thing about collections of people that exceed Dunbar’s number is that no one person can perfectly classify the boundaries of the group. There’s a fuzzy boundary, but the people who “dissent LessWrong” causatively tend towards opting out of the group, while the people who don’t vocally dissent causatively tend towards losing the dissent. Each of these is suffering social biases in thinking that the group is unitary; that you’re either “part of” LessWrong or else you’re not. A while ago I formally serialized one of my thoughts about society as, “Groups of people live or die based on their belief in the success of the group.” If nobody believes the group is going to accomplish anything or be useful in any way, they’ll stop showing up. This self-fulfills the prophecy. If they continue attending and see all the members that are still attending and think, “Imagine the good we can do together,” they’ll be intensely motivated to keep attending. This self-fulfills the prophecy.
Did I mention this is something I realized without any help or feedback from anyone?
That’s usually not the case, or at least not to a strong extent. Even sub-Dunbar’s-number groups tend to have at least a few internal contrarians: there’s something in human psychology that encourages at least a few folk to keep the identity while simultaneously attacking the identity.
I’ve been that person for a while on a different board, although it took a while to recognize it.
True, the trend for the average person is to strongly identify within the group or not, but I’ve never seen that include all of them.
That assumes the only point of group membership is to see what the group accomplishes. There are other motivations, such as individual value fulfillment, group knowledge, ingroup selection preferences, even the knowledge gained from dissenting. There are some sociology experiments suggesting that groups can only remain successful in the long term if they sate those desires for newcomers. (And even some arguments suggesting that these secondary, individual motivations are more important than the primary, group motivation.)
The demographic analysis is fascinating, especially if correct. The rough group analysis is not especially fresh outside of the possible lack of contrarians in this instance, though independent rediscovery is always a good thing, particularly given the weakness of social identity theory (and general sociology) research.
I’d caution that many of the phrases you’ve used here signal strongly enough for ‘trolling’ that you’re likely to have an effect on the supergroup even if you aren’t intending it.
Oh, troll is a very easy perception to overcome, especially in this context. Don’t worry about how I’ll be perceived beyond delayed participation in making posts. There is much utility in negative response. In a day I’ve lost a couple dozen karma and learned a lot about LessWrong’s perception. I suspect there is a user or two participating in political voting against my comments, possibly in response to my referencing the concept in one of my comments. Something like a grudge is a thing I can utilize heavily.
I’d expect more loss than that if someone really wanted to disable you; systematic karma abuse would result in a karma loss equal to either some multiple of your total post count, or a multiple of the number of posts displayed per user history page (by default, 10).
Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user possessing a comment under it.
It wasn’t retroactive when I did this test a while back. Natch, code changes over time, and I haven’t tested recently.
I iterated my entire comment history to find the source of an immediate −15 spike in karma; couldn’t find anything. My main hypothesis was moderator reprimand until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I’m unsure if the retroactive quality of it is immediate or on a timer but I don’t see any reason it wouldn’t be immediate. Feel free to test on me, I think the voting has stabilized.
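The hypothesized mechanic can be sketched in code. Everything here is illustrative: the threshold value, the penalty size, and the charge-once-per-user choice are assumptions about behavior the thread only speculates about, not LessWrong’s actual implementation.

```python
# Sketch of the hypothesized retroactive karma penalty: when a parent
# comment drops below the display threshold, every user with a reply
# under it is charged 5 karma. All constants are assumed for illustration.

DISPLAY_THRESHOLD = -3   # assumed score at which a comment is hidden
REPLY_PENALTY = 5        # assumed karma cost per user replying under it

def retroactive_penalty(parent_score, replies_by_user):
    """Return karma deltas per user once the parent is below threshold.

    replies_by_user maps a username to their number of replies under the
    parent. The thread leaves ambiguous whether the charge is per reply
    or per user; this sketch models it as once per user.
    """
    if parent_score >= DISPLAY_THRESHOLD:
        return {}
    return {user: -REPLY_PENALTY for user in replies_by_user}

deltas = retroactive_penalty(-4, {"alice": 1, "bob": 2})
# three downvoted replies of your own would then account for a -15 spike
```

Under these assumptions, a single downvote pushing one parent below the threshold would explain a sudden multi-point drop without any downvotes landing on your own comments directly.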
I’m utterly unclear on what evidence you were searching for (and failing to find) to indicate a source of an immediate 15-point karma drop. For example, how did you exclude the possibility of 15 separate downvotes on 15 different comments? Did you remember the previous karma totals of all your comments?
More or less, yeah. The totaled deltas weren’t of the necessary order of magnitude in my approximation. It’s not that many pages if you set the relevant preference to 25 per page and have iterated all the way back a couple times before.
Gotcha; I understand now. If that’s actually a reliable method of analysis for you I’m impressed by your memory, but lacking the evidence of its reliability that you have access to, I hope you’ll forgive me if it doesn’t significantly raise my confidence in the retroactive-karma-penalty theory.
Certainly; I wouldn’t expect it to.
Great post, thank you for taking the time to write it. It’s insightful and I think clearly identifies the problem with Less Wrong. It could probably do with being a little bit less provocative, i.e. by removing references to Less Wrong being a “hive”. Upvoted for valid criticism.
You’re misunderstanding; I am not here to gain karma or approval. I am not reformed by downvoting; I am merely informed. I know well the language LessWrong likes and hates; I’ve experimented, controls and everything. I didn’t say this because I was willing to write more about it; I wrote it because I’d already pre-determined you’d be sympathetic. I am not part of the hive and I will not be dissecting or reforming it; that is yours if you should see value in LessWrong. My job is not to sugar-coat it and make it sound nice; I can derive utility from downvotes and disapproval—and in more ways than the basic level of information it provides. I’m not going to call it something it isn’t and use terms that make it seem like anything less than a hive mind. I am giving my full opinion and detecting agents worth interacting with; I am not here to participate as part of the hive. It is not yours to defer to me as if I was going to resolve the problem in my capacity as a member of the hive; I highlighted your being on the border in my capacity as an agent beyond the concept of being inside or outside the hive. I can enter and leave freely; one moment inside, one moment outside. I am here to facilitate LessWrong, not to advance it.
I appreciate the sentiment though. :D
I’m really at a loss for reasons as to why this is being downvoted. Would anyone like to help me understand what’s so off-putting here?
It’s boring.
I’m not sure how to act on this information or the corresponding downvoting. Is there something I could have done to make it more interesting? I’d really appreciate knowing.
To be clear: I replied before you edited the comment to make it a question about downvotes. Before your edit you were asking for an explanation of the inferential silence. That is what I explained. The downvotes are probably a combination of the boringness, the superiority you were signalling and left-over-bad-feeling from other comments you’ve made tonight. But I didn’t downvote.
Given the subject and content of the comment it probably couldn’t have been substantially less boring. It could, however, have been substantially shorter.
No, what gets downvoted is when a newbie starts criticizing aspects of lesswrong in a way that indicates he has no clue what he’s talking about.
I suspect that Adele_L means “contrarian within LW”. There are vocal opponents of cryonics being a rational choice/investment, of many worlds being a “slam dunk”, of UFAI being an x-risk and of other prevailing opinions here. When they present their arguments in a thoughtful manner, they don’t necessarily get downvoted below the default visibility threshold.
The flip side of that advantage is that anytime anybody criticizes anything about that party, there’s some social pressure for you to just nod along and say “Yup, they sure are crazy and racist”, even if you don’t actually agree with the criticism. I don’t find that very conducive to truth-seeking either.
This is true to some extent, but much less so than the inverse. I have at times found myself “defending” the racist party by pointing out that even though they get much of the racist vote, they aren’t actually explicitly racist. And they do actually have a point on a couple of issues. Rarely if ever have I gotten in trouble for this.
On the other hand, I hear these stories from the US where on thanksgiving, some uncle will state that “the gays are getting uppity, somebody needs to put them in their place”, and nobody says anything in order to keep the peace. This silence bothers me much more than the inverse.
It helps to remind yourself that the silence strongly indicates that everyone is extending the courtesy of allowing him to have wrong opinions. If someone reinforces with, “Damn straight!” it sends a different signal entirely. Oftentimes, the best you can do is politely signal that you’d rather talk about something else, strongly implying they have said something offensive. People tend to pick up on that on some level.
This can come across very differently if you’re the target of the comment.
A lot can come across differently when you’re trapped behind an inescapable cognitive bias.
ETA: I should probably be more clear about the main implication I intend here: Convincing yourself that you are the victim all the time isn’t going to improve your situation in any way. I could make an argument that even the sympathy one might get out of such a method of thinking/acting is negatively useful, but that might be pressing the matter unfairly.
It sounds like you believe that treating silence as a way of expressing that the opinion enjoys social support is the result of bias, but that treating silence as a way of expressing that the opiner deserves courtesy though the opinion is wrong is not the result of bias.
Do you in fact believe that?
If so, can you provide any justification for believing it? Because it seems implausible.
I’d need an expansion on “bias” to discuss this with any useful accuracy. Is ignorance a state of “bias” in the presence of abundant information to the contrary of the naive reasoning from ignorance? Please let me know if my stance becomes clearer when you mentally disambiguate “bias.”
If you feel like responding, you can assume I mean by “bias” whatever you meant by it when you used the word.
Conversely, if you feel like turning this into an opportunity for me to learn to clear up my mental confusions and then demonstrate my learning to you, that’s of course your call.
If I experience such an epiphany I may let you know whether your stance thereby becomes clearer to me.
Hah. I like and appreciate the clarity of options here. I’ll attempt to explain.
A lot about social situations is something we’re directly told: “Elbows off the table. Close your mouth when you chew. Burping is rude, others will become offended.” Others are more biologically inherent; murder isn’t likely to make you popular at a party. (At least not the positive kind of popularity...) What we’re discussing here lies somewhere between these two borders. We’ll consider aversion to murderers to be the least biased, having very little bias to it and being more a rational reaction, and we’ll consider asserted matters of “manners” to be maximally biased, having next to nothing to do with rationality and everything to do with believing whatever you’re told.
It’s a fuzzy subject without fully understanding psychology, but for the most part these decisions about social interaction are made consciously. In the presence of a biased individual, for whatever reason and whatever cause, if you challenge them on their strong opinions you’re liable to start an argument. There are productive arguments and unproductive arguments alike, but if the dinner table is terribly quiet already and an argument breaks out between some two members, everyone else has the option of “politely” letting the argument run its course, or intervening to stop this silly discussion that everyone’s heard time and time again and is tired of hearing. Knowing all too well how these kinds of things start, proceed, and stop, the most polite thing you can do to not disrupt the pleasant atmosphere that everyone is pleased with is simply not to indulge the argument. Find another time, another place. Do it in private. Do whatever. Just not now at the dinner table, while everyone’s trying to have a peaceful meal.
There’s an intense meme among rationalists that whenever two rational agents disagree, they must perform a grand battle. This is just not true. There are many many opportunities in human interaction to solve the same problem. What you find is that people never work up the courage to do it ever, because of how “awkward” it would be, or any other number of excuses. “What if s/he rejects me? I’ll be devastated!” Intelligent agents are something to be afraid of, especially when their reactions control your feelings.
The courtesy isn’t so much for the opiner as it is for everyone else present. It is a result of bias, but not on the part of the people signaling silence; they’re just trying to keep things pleasant and peaceful for everyone.
Of course my description here could be wrong, but it’s not. The easy way to determine this is to ask each person in turn why they chose to be silent. Pretty much all of them are going to recite some subset of my assessment. Some people may have acquired that manner from being instructed to hold that manner, while others derived it from experience. The former case can be identified by naive confusion: “Mommy, why didn’t anyone tell him he was being racist?” You’ll understand when you’re older, because people periodically fail to recognize the usefulness of civility. You’ll see it eventually, possibly coming from the people who were surrounded by mannerly people to the degree that they never were able to acquire the experience that got everyone else to adopt that manner. Even if it makes sense rationally, it could be the result of bias, but it can be hard to convince a child of complex things like that, so the bias doesn’t play a role beyond that person finding that the things they were told as a child, which they distinctly remember never understanding growing up, did actually make sense in reality.
You can’t fault the child for being ignorant, but you can fault them for not recognizing the truth of mother’s words when the situation comes up that’s supposed to show them why the wisdom was correct. If they don’t learn it from experience like everyone else does, something went wrong. Possibly they overcompensated when they rejected Christianity and thought that it was a total fluke that their parents were competent enough to take care of a child. All those things that didn’t have to do with Christianity? Nope. Biased by Christianity. Out the window they go, along with the bathwater. When grandma says something racist and everyone goes silent, that is not tacit approval, that is polite disapproval. To not recognize something so obvious is going to be the result of some manner of cognitive bias, whether it’s a mindset of being the victim, white knighting on Tumblr’s behalf, an extreme bias against Christianity, etc.. Whatever it is, it makes you think that a position contradicting the wisdom handed down and independently verified by generation after generation of highly intelligent agents capable of abstract reasoning is somehow the rational one.
Our ancestors didn’t derive quantum mechanics, no. That doesn’t make them unintelligent by any stretch of the imagination. When it came to interacting with other intelligent agents, we had intense pressure on us to optimize, and we did. Only now are we formally recognizing the principles that underlie deep generational wisdom.
So to answer concisely:
Barring that “treating silence as a way of expressing that the opiner deserves courtesy” is the result of bias, but that the bias originates in the opiner, not the analyzer of the silence, if we’re speaking strictly about the analysis of silence in modern social settings...
Yes.
I can cite a pretty large chunk of the history of civilized humanity, yes.
The confusion is arising from your misunderstanding that decision theory is embedded more deeply in our psychology than our conscious mind—primitive decision theory (everything we’ve formally derived about decision theory up to this point) is embedded in our evolutionary psychology. There’s a ton more nuance to human interaction than social justice’s founding premise of, “Words hurt!!! (What are sticks and stones?)”
OK. Thanks for the clarification.
Also, while we’re on the subject, would you mind tabooing what you mean by “racist”.
Your main point is basically wrong. Political differences really are about values. Parties do differ in their factual claims, but these claims are usually merely to undermine the other sides’ advocated policies. It’s funny that you brought up this evil “racist party” as an example, since racism is obviously about preferences rather than facts. The fact that your friends agree they are awful doesn’t mean their preference is wrong, it just means your friends don’t share their values. It’s hard to believe you don’t realize this, but I guess most people are unable to take the outside view of their own beliefs.
Mainly, it seems like you just want another place to complain about how evil, stupid, and racist your political enemies are. Are there not enough places to do that online?
Preferences can be wrong, in particular if they are caused by mistaken factual beliefs. It’s the same principle as with mistaken emotions: correctness conditions on factual beliefs extend to correctness conditions on consequences of those beliefs, so that consequences of incorrect factual beliefs are potentially suboptimal, and corrections to the beliefs could be propagated as corrections to their original consequences.
Is it? If I publicly state that the mean IQ of black people is about a standard deviation below the mean IQ of white people, I will be labeled a racist in an instant. Which preferences did I express?
Of course, preferences can be inferred from what facts you choose to publicly state. For example, if you publish a pamphlet all about crimes committed by blacks, people can infer from that something about your goals (i.e. to encourage distrust of blacks).
Perhaps some people would claim merely believing black IQ is lower is racist. But this clearly is not the sense in which Sophronius was using “racist.” It is Sophronius’ context to which my comment applies.
In case it wasn’t clear to you, Sophronius was referring to the Dutch Freedom Party. Whether they are “racist” or not simply depends on your definition of the word. You could use the term “nationalist,” or whatever. It doesn’t matter. The word “racist” is just being used as a shorthand about their beliefs. It is clear that the salient point of disagreement between liberals like Sophronius and the Dutch Freedom Party is their values.
That wasn’t evident to me at all, I was reading you as making an observation about racism in general. Sophronius clearly wants to discuss issues beyond those specific to Dutch politics.
Sophronius’ desires aside, I am interested in your thoughts about knb’s answer to your actual question.
Whether racism is about preferences or facts? Though there is a variety of definitions, I think it’s mostly about beliefs (which may or may not be based on facts and which may or may not be expressed as preferences).
OK. Thanks for clarifying.
It depends on where you state that, and which words you use.
A preference for saying politically incorrect things?
And is that preference racist?
Political arguments usually involve falsifiable factual claims which may or may not be wrong.
It’s possible that political differences are, at their core, really about values, but political debates are often more about fact rather than values, possibly because people might be embarrassed to publicly state their actual values and/or want to convince people with other values.
For instance, if you are in the upper class and don’t particularly care about the welfare of strangers, then it is probably in your best interest to advocate for tax cuts funded by cuts of public expenses on things you are unlikely to benefit from, such as public healthcare.
But of course very few people are going to openly claim that they want public healthcare cuts for their personal interest. They will argue that the taxation level is so high that it stifles economy, that public healthcare is inefficient, that it creates “death panels”, etc.
Factual claims are made to justify a policy as serving the public interest.
Even if I disagree with the assertion that this article is merely for that purpose, you raise a compelling argument: Upvoted.
I think that’s a tautology. “Major issue of contention” means in the US that the two major parties have opposing views on the subject. If both parties share the view, then the US doesn’t treat the matter as a major issue of contention.
One example would be the war on drugs. Another would be whether the US president is allowed to kill US citizens that live abroad without due process. In the core US debate those aren’t major issues of contention because the two parties basically agree on them.
I don’t think posts with nuance get upvoted because they are fair. Nuance helps people to understand the world better. A post that just says X is crazy doesn’t help anyone update his map of the world.
In most political discussions, a good post isn’t about judging which side is stupid or crazy but about actually understanding the issue in depth.
A tautology is when the two are the same by definition, I think you mean that there is almost complete overlap. The latter is something I would still disagree with, as in cases of for example gay marriage, the general populace was much more in favour than you would guess from listening to either political party until recently.
As for your second point, I was not taking issue with nuanced posts. Nuance is great for dealing with complex issues. The issue I have is with the tendency to upvote posts that show both sides of an issue as being equal regardless of whether or not this is actually the case.
No, I mean to say that “major issue of contention” can mean in the US that it’s a topic where there is contention between the two major parties.
Latest Gallup poll indicates 52% to 43% for the general population approving of same-sex marriage. I don’t think that’s far off from what you’d expect.
Could you give examples? I personally haven’t observed that pattern on LessWrong.
I would upvote an attempt to actually measure LW’s political bias.
Well, there’s a Lesswrong census every year, and that includes questions on political affiliation.
link to 2012 results
Other than that, I’m not sure how you would measure political bias.
Bias is something different than having a political affiliation.
Bias means that you are irrational in some way.
And humans, even lesswrong readers, are all varying degrees of irrational. Therefore understanding the distribution of political affiliation of people that use the site is a significant step towards understanding the site’s bias.
It may be evidence, but it still leaves open the question of whether political affiliations are slanted as a result of greater rationality or of a political bias. Without some sort of controlled experiment this would be hard to tell.
If they aren’t slanted, it either means that what we discuss is not related to politics (implausible) or that Lesswrong doesn’t have an impact on such matters.
I’m not sure how to measure it, either—hence my pledge of karma for whoever figures it out. :)
LOL. So you’ve safely signaled your tribal identification. And it seems you think it’s the “only reasonable” thing to do :-D
Why not name the party you think is crazy, what is gained by this indirect approach? Let me do it for you: Republicans are crazy. Monarchy has its flaws certainly, but it is better than the other forms of government mankind has tried from time to time.
Disagree, but upvoted because I’m in favor of people openly expressing their political views on LessWrong.
I do think Monarchy is preferable to a Republic in general, but I should perhaps clarify the above post was a joke, meant to show precisely that whether Republicans in particular are “crazy” for the particular reasons that Sophronius stated elsewhere is a question that makes sense only in a narrow political view of 2013, give or take a few years.
I actually do identify myself as a republican, though with some reservation: We’ve been a kingdom around here for so long now that getting rid of it would likely be more trouble than it’s worth. It’s all about the pragmatism.
I don’t think Republicans are significantly more crazy than Democrats, either in the sense of not sharing my values or having a bad map of reality in their policy; both are pretty crazy. Maybe one of them has slightly easier-to-detect craziness.
For what it’s worth, I see more Republicans leaving their party because they think it’s gone mad, while Democrats become Republicans because they come to believe the Democrats are wrong.
If “everything that can be destroyed by the truth should be”, why aren’t we discussing Edward Snowden?
In what context? As an example of agency/heroic responsibility?
I don’t think that would yield much insight. I’m more interested in the dynamics of mistrust, especially as an obstacle to collective intelligence. And although the NSA scandal provides some rich material for thinking about that, I have refrained from bringing it up because I too have, perhaps falsely, perceived LW as a place where political topics are sniffed at.
I think Charlie Stross knocked that one out of the park. (Longer version, login-walled.)
Yudkowsky.
The best way of handling mindkilling is to look at hard data.
To some extent you may have a valid point, but parties are extremely diverse entities. Even if one looked at small, fringe parties, there’s heavy variation in the beliefs. So, you might have a more valid point if you said something like “Self-identified Republicans are on average more likely to believe crazy things than self-identified Democrats.” Now, this will run into other issues because party identification is fluid, but it would be a start.
So, let’s use some beliefs that are by a large-scale consensus “crazy” and that are stereotypically associated with specific ends of the political spectrum in the US. I suggest the following two: “Barack Obama is a Muslim” (associated with the right), “The government was involved in 9/11” (left). It would be interesting to look at others that are more straight scientific issues, such as homeopathy works (left), vaccination is bad/causes autism (left), evolution is wrong (right). Now, let’s look at the data, but I don’t have the time to do so. So let’s focus on the two essentially conspiratorial claims.
The most recent poll I can find for Obama being a Muslim is here. Approximately 30% of Republicans think that Obama is a Muslim. Curiously, approximately 10% of Democrats think Obama is a Muslim (this is likely connected to the fact that 5-10% of any poll will be extremely confused or just give nonsensical answers).
Unfortunately, very few of the polls about beliefs about 9/11 ask for party identification, but there’s a Rasmussen poll indicating that around 35% of Democrats answered yes to Bush knowing about 9/11. See here. They don’t state the actual percentage of Republicans, and the actual poll seems to be behind a paywall. Moreover, other polls have gotten much smaller total percentages of belief, so actually getting data here may be tough. But at least from these two, it looks plausible that about an equal percentage of Republicans and Democrats are being crazy for their thing (although slightly higher numbers of Republicans may be saying yes to the Democratic brand of crazy, that’s hard to really tease out from the data, given margins of error, differences in questions, timing, and other issues).
Now, you can argue that there’s a difference in how seriously these beliefs are taken by the leadership in each party. So what’s the highest-ranking politician who has said that Obama was a Muslim? Well, it is easy to find high-ranking Republicans who think that Obama is getting advice or orders from the Muslim Brotherhood. Louie Gohmert is one of the louder examples. But that’s not the same as claiming that he’s a Muslim. So let’s now look at the reverse. The closest analog for the 9/11 issue to Gohmert would be people wanting a new investigation. It turns out that’s a pretty large set. See here. It ranges throughout the political spectrum, and it isn’t easy to tell without much more work which of those want a new investigation because they think the 9/11 Commission didn’t do a great job and how many want a new investigation because they think it was the Illuminati/Rosicrucians/Jews/Daleks etc. So, actually looking at this metric may be tough.
Note it occurred to me while doing this that birtherism might have been a better analog than claiming that Obama is a Muslim, and one does in fact get high-ranking politicians endorsing that. See e.g. here. My impression is that genuinely crazy ideas are much more likely to be taken seriously by leadership on the right than leadership on the left, but getting substantial evidence for that is likely to be tough. This data at least suggests that any difference between the two among the party base is small. It would be interesting to look at a bunch of other issues in a systematic fashion. While that would be fun to look at, it wouldn’t by itself say much of anything about any specific policy issue where Republicans or Democrats are correct.
This is not actually all that objective since it’s not clear what constitutes a “crazy belief”. Is it simply a matter of how much easily available evidence there is against it? Or does it also include considerations like what proportion of people believe it and how much effort smart people have devoted to rationalizing it?
Ideally, yes (and I upvoted this for its insight), but that can easily become a Fully General Counterargument if we aren’t EXTREMELY careful—since “how much effort smart people have devoted to rationalizing it” can look like “how much easily available evidence there is against it”, and vice-versa.
As people have mentioned, this is a Very Hard Problem.
Well the context was determining which beliefs were “crazy” for use in a meta-analysis, not determining what to believe directly.
Also frankly I can think of numerous areas where there is a lot of easily available evidence that a lot of smart people have devoted a lot of energy trying to rationalize away or at least train people not to notice, the correlation between race and both propensity to violence and intelligence being the most obvious example.
In this particular example, though, I don’t think (non-mindkilled) people try to train people not to notice it, and in most rational cases that I’ve seen, it isn’t “rationalized away” so much as people spend a good deal of effort insisting that correlation does not always imply causation, and that social policies which take that correlation into account need to be VERY careful not to repeat tabooed historical solutions. I think if “I agree denotationally but object connotationally” were a more widespread concept, we would see a lot of this tendency disappear.
Well, by that standard nearly everyone is mindkilled about this issue.
Well that depends on what theory one means by “causation”. There are three main theories about the cause of this correlation:
1) The genetic theory.
2) The cultural theory, i.e., contemporary black culture has elements that discourage intelligence and encourage violent behavior.
3) The implicit racism theory.
On the occasions when most intellectuals are willing to admit this correlation exists at all, they immediately insist that it is completely due to (3) and proceed to ingeniously seek ever subtler forms of racism.
Well, the current solutions, all based on theory (3), aren’t working. The rational response to this evidence would be to assign more weight to theories (1) and/or (2). This, however, is considered unacceptable so people wind up searching for ever subtler forms of racism to explain why the correlations persist.
Have you considered the alternative hypothesis that you’re hyper-sensitized to noticing people who believe 3-and-ONLY-3, and are therefore ignoring evidence of people who believe 2-and-3, or occasionally 2-and-3-with-a-smattering-of-1? Because I know of PLENTY of people who agree that 2-and-3 are likely causes (with maybe a weak influence from 1), and operate together in a feedback loop—but I also know plenty of people that listen to the feedback loop theory and hear “so it’s really all 3 then” instead of the actual message of “it’s complicated, but 2 and 3 are entangled together in ways that make 2 difficult to treat without dealing with 3, and that make 3 difficult to stop as long as 2 continues”.
And part of the problem is, people can’t say “2-and-3 with a weak influence from 1” without having people jump up and down on them and say “SEE? SEE! YOU DO ADMIT 1! YOU ADMIT 1! THAT MEANS CONTINUING 3 IS JUSTIFIED! I WIN! I WIN!”, which is a strong emotional disincentive to admitting 1 AT ALL. Going on and on and on about how “racial differences are real and EVERYONE IS LYING SO STOP LYING DAMNIT” is a great way to ensure that people become MORE mind-killed, because it doesn’t leave a line of retreat.
Yes, I’m aware of the feedback theory. I will point out that you still have the problem of how other groups, e.g., Jews, Irish, Asians, were able to break out of the feedback loop.
Also, why aren’t there any proposed interventions in the (2) part of the feedback, e.g., actively criticizing ghetto culture? Notice that the groups I listed above broke out of their feedback loops while the wider culture was focusing on (2) (and to a certain extent (1)) rather than (3).
Then as an aside (and I will address the rest of your post momentarily), why did you assert:
Secondarily, there are plenty of proposed interventions in the (2) part of the feedback, both from blacks and whites. Bill Cosby is rather famous for them; Chris Rock also has a particularly poignant set of criticisms. Part of the problem is that plenty of people have polluted the “criticize (2)” pathway by using it as a justification for reinforcing process (3) - “concern trolling” is a well-worn path.
As for the other groups, each of those groups came into their situations differently; I might suggest comparing black culture to native American culture, rather than Jewish, Irish or Asian culture. Sociology and history are complex, and outcomes are highly path-dependent.
This assertion looks true to me by empirical observation.
Can you show some examples of whites proposing interventions into black culture (and not immediately being tarred and feathered)? You mentioned two black guys.
If you add the second criterion (not immediately being tarred and feathered), it becomes more difficult. I’m aware of several hypotheses for why that may be. Would we like to discuss them?
The reasons look pretty obvious and hardly a mystery to me.
I’d prefer to backtrack a bit to the list of the three hypotheses about the black-white gap. Are you asserting that reason(1) is insignificant and the real cause is the feedback loop due to (2) and (3)?
That depends on what you mean by “insignificant”. I think that (1) has less of an effect than (2)+(3) by about half an order of magnitude or so, AND that (1) is much harder to do anything about in an ethical manner than (2)+(3). Is that the same thing?
Let’s use numbers! :-)
The black-white IQ gap is about 15 points or about 1 standard deviation. Half an order of magnitude is five times greater. So you are saying that the genetic component has the effect of about 2.5 IQ points and the culture+racism have the effect of about 12.5 IQ points. Correct?
Depends on your ethics. Not to mention that reality doesn’t care about what’s easy to do in an ethical manner and what’s not.
Sweet! I love this part. :)
Actually, half an order of magnitude is 3.2ish times greater, since “orders of magnitude” are logarithmic. Half an order of magnitude means 10^0.5, not 10 × 0.5, just as two orders of magnitude means 10^2, not 10 × 2.
So I would be saying the genetic component has the effect of about 5 IQ points on average, and the culture+racism has the effect of about 10 IQ points, if all we’re talking about is IQ points. When I made the assertion I was thinking more about outcome quality in general, but thinking about it, I think that somewhere between 3 genetic/12 cultural and 5 genetic/10 cultural sounds highly plausible; I’d be willing to peg those as the endpoints for my 90% confidence interval, and I’d be willing to update quite a bit given particularly challenging evidence to the contrary (but it’d have to be particularly challenging evidence).
No, but humans tend to.
Depends on how you treat this, but OK.
Well, even if culture+racism is 3.2 times more important, that translates to about 3.6 points for genetic and 11.4 for c+r. However, I don’t think the precise numbers affect the argument here.
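For the record, the arithmetic behind these figures can be checked mechanically. This is a minimal sketch, assuming (as the thread does, without endorsing either number) a total gap of 15 points and a ratio of 10^0.5 between the two components:

```python
# Split a 15-point total into two components whose ratio is half
# an order of magnitude (10**0.5 ~ 3.16), as discussed above.
# The 15-point figure and the ratio are the thread's assumptions,
# not established quantities.
total_gap = 15.0
ratio = 10 ** 0.5  # half an order of magnitude, ~3.16 (not 5)

smaller = total_gap / (1 + ratio)   # the "genetic" share in the discussion
larger = total_gap - smaller        # the "culture+racism" share

print(round(ratio, 2))    # ~3.16
print(round(smaller, 1))  # ~3.6
print(round(larger, 1))   # ~11.4
```

Which confirms the 3.6/11.4 split quoted above, and that a 5/10 split would correspond to a ratio of 2 rather than 10^0.5.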
So, did you actually look at any evidence? There is a lot of it.
Given your preferred hypothesis, what would you expect the IQ of African populations to be? They share some genes with African-Americans, but don’t share the culture, and there shouldn’t be much suppressive racism outside of South Africa during the last 50 years or so.
I’m not sure this is the right way to be looking at the issue. It’s implausible that racism directly affects IQ; your stem cells don’t go out and check other people’s opinions of your ethnic background before they develop into a central nervous system. The idea is more that it’s associated with environmental factors that are reflected as a lower actual or apparent IQ: worse nutrition or other types of neglect in childhood, for example, or less motivation. It’s plausible that these more or less closely mirror what you’d see in nations without the same racial politics but which are unstable in other ways—and much of sub-Saharan Africa does have that reputation. (The continent’s own ethnic politics might also play a role—American-style racism isn’t the only type out there. How do Japanese-born ethnic Koreans do in comparison to Korean-born Koreans?)
One possible way of testing this would be to look at rapidly developing African countries in comparison with flatlined ones (Google Public Data is good for picking out which are which) and see if that’s reflected in IQ, if the data exists at that granularity. Other ways of breaking it down might also be useful: rural vs. urban, say, or by socioeconomic status.
Well, yes, if we un-anchor from the way the discussion went in this thread, the basic issue is nature or nurture—are IQ differences caused by genes or by some/any/all “environmental” factors which can range from cultural oddities to micronutrient deficiencies.
My impression—and I’m too lazy to go, collect, and array the evidence properly—is that while it’s clear that environmental factors can suppress IQ in populations, after you correct and adjust for everything that comes to mind, the IQ gaps persist.
If by IQ you mean one’s performance on IQ tests, rather than the g-factor they seek to measure, there is a not-so-implausible mechanism by which racism can affect the former.
IAWYC but children retain lots of neuroplasticity even after their central nervous system has developed, and even adults do a little bit.
I meant g in that sentence, yes. The bit about motivation later was alluding to stereotype threat and similar effects.
Point taken re: neuroplasticity. It doesn’t seem likely that that’s an overwhelmingly large contributor to adult intelligence, but correlation between adult and childhood IQ scores isn’t so high that it couldn’t be playing a role. I’d be interested to see how that correlation changes between populations, now.
That depends: do they share cultural overlap with Protestant Europe/America, or with post-Confucian Asia? And if so, how much?
What are the IQs of other aboriginal cultures like, that diverged from, say, Asian or Polynesian stock, but also lack cultural influence from Protestant Europe/America or post-Confucian Asia?
Yes, I’ve looked at evidence, but under which lens should I have looked at that evidence?
The question was “what would you expect..?” I am sure that it depends, but what is the outcome?
The culture of African-Americans clearly has more “cultural overlap” than the culture of Africans—would you agree? Given this, would you claim that African-Americans (after adjusting for the percentage of white, etc. genes that most of them have) have considerably higher (10-12 points) measured IQ than Africans?
I find “matching reality” to be a reasonably good lens :-)
Last time I looked the former did have considerably higher measured IQ than the latter (around 85 vs around 70), so what’s your point?
Did you adjust for the percentage of white genes? Most African-Americans are about 25-50% non-black by ancestry, as far as I remember. That would influence the mean IQ.
When comparing populations of third-world and first-world countries you also have to be very careful to account for things like malnutrition, etc.
Looks like we already had this same conversation before. ;-)
Ah. You have a better memory than I do :-)
No, actually I remembered mentioning something about Ethiopians having lots of Caucasian DNA, used Wei Dai’s tool to search my comments for it, and… I was kind of surprised it was talking to the same person that time too.
That sounds overconfident.
On a separate channel:
I didn’t ask if you were aware of it, I asked if you had considered it.
I am going to make a criticism about tone, here, but please understand that I’m not doing this as an attempt to refute or dismiss your argument; I’m doing this out of a legitimate desire to help you get your message across to people who would otherwise be unreceptive to it. You have a tendency to force your idea of other people’s position down their throats; it often comes across as wanting to ‘win points’ against liberal positions, rather than an attempt to actually seek truth. If you avoided personalizing language when criticizing liberal positions, you might help people see an opportunity to distance themselves from those positions, and thus help people avoid mind-killing emotional responses to having their positions challenged.
That’s a really good point. I was thinking purely in terms of evidence levels against the belief but how much resources is spent rationalizing it might matter. I was trying to avoid thinking too much of that by using the most obviously crazy beliefs all around, but if there’s systematic rationalization attempts more for one than another that might not help.
Well, your examples are not very well balanced by level of evidence against, although it’s hard to compare this across different domains.
Could you expand why you think they don’t have about the same levels of evidence against? They seemed to to me, but it is possible that I’m missing something. I agree that making such comparisons across domains may be tough.
Oops. I was comparing Birtherism to controlled demolition theories and forgot that not all 9/11 Truther theories were that crazy.
Interesting reply. The problem with examining this is that it’s really hard to objectively examine who holds the most crazy ideas: Ideally you would just pick X random crazy ideas and then see who holds them the most, but human brains are no good at this.
Instead what I do is look at the presidential elections and debates and see what the important people of each party actually say. You have to admit that the republican primaries were absolutely nuts, and the same cannot be said of democratic primaries. Of course Mitt Romney suddenly became much much more moderate as soon as he won the primaries, but to me this indicates that there is currently far more pressure among republicans to espouse crazy beliefs when talking to their base at least. What’s worse, I suspect that a large number of republican candidates really believed what they were saying, though I suspect that Romney was just being pragmatic. As soon as he won the primary, Hotelling’s law ensured that he had to make a swift jump to the middle to stand a chance, which he duly did.
Which is which? But be warned: as a former state senate candidate for one of the major U.S. parties, and as someone who thinks he knows a lot about economics and foreign policy, I will have a very low opinion of you if you think my party is “crazy”, unless you have a sufficiently impressive understanding of economics and foreign policy that you can dismiss as crazy someone with my background. After all, if you are calling my party crazy, you are calling my beliefs crazy, and you think that if we were to get into a debate about U.S. economic policy you could easily defeat me.
I don’t trust the judgment of anyone who says they don’t believe in evolution.
A politician’s irrational beliefs about economics have a much larger effect on his ability to do his job than irrational beliefs about historical biology.
Politicians don’t always say what they believe. Plus, conditional on the Christian God being real, you should reject evolution and almost all American politicians claim to believe in the Christian God.
Wait doesn’t the Catholic Church accept evolution (with certain qualifications)?
http://www.newadvent.org/cathen/05655a.htm
etc.
I think the Catholic Church is PR savvy enough to realize that at this point wholesale denial of evolution is like wholesale denial of heavier than air flight (in both cases, the phenomenon is so well established that there are businesses that rely on it).
It looks like the Catholic Church was never strongly against evolution, and has since taken up the idea that evolution happened, though God was involved in the evolution of the human race.
http://en.wikipedia.org/wiki/Catholic_Church_and_evolution
Opposition to evolution is a distinctively Fundamentalist position—it’s not characteristic of Christians in general.
http://en.wikipedia.org/wiki/Christian_fundamentalism
Imagine all politicians of party A know a lot about economics. But because it polls better, party A has an economic position that is completely irrational.
Would you say it’s fair to summarize party A as irrational in the aspect of economics?
Yes
That’s only true to the extent that the party, as an organization, accurately reflects your beliefs and desires (unless your belief is “my party is right”, in which case you’ve been mindkilled).
If the example of a political party is too contentious, consider a lynch mob or a committee. Group psychology is more than just the sum of its members; in extreme cases, the group can act in ways that no particular member approves of.
If you have devoted a lot of resources to a “crazy” political party there is probably something wrong with you.
Well, maybe. If your investment goes back decades and the party only went crazy recently, then at worst you’re a victim of mental inertia. If your investment is part of a plan to de-crazify the party, then at worst you’re tilting at windmills.
It’s hard to write anything else without abandoning the pretense that we’re discussing a hypothetical, so I’ll leave it there. A general point, though: I’ve long suspected that it’s bad mental hygiene to think of any particular political party as “yours”, even if you’ve been elected on its platform. It’s a special case of keeping your identity small.
But when you run for office as I have and have friends who have run in the same party it almost has to become “yours”.
This is telling and frightening. Do you earnestly believe the entirety of half a nation agrees with you?
While I disagree with the strong form of Aumann’s agreement theorem, by the time we’re talking a state senatorial position, you probably should be exchanging enough information with everyone responsible for your party’s position as to at least reduce any gaps. There are possible stable orbits outside of complete agreement, but the mechanic involved for state senators favors strong agreement.
Also, folk often conflate the position of individual politicians with the positions of their party just as the reverse, so it kinda is meaningful in that setting, as well.
This is different from the actual populace of the entire nation agreeing with you, since:
Much of the population doesn’t vote at all.
A non-trivial amount of those voting do so based on erroneous information or no information at all.
The political alignment of a party changes drastically from location to location.
The relevant political topics change depending on position, due to federalism.
Not if you consider it the “least crazy” alternative, and with only two parties in your country there doesn’t seem to be much choice.
Upvoted because even if the answer is “no,” it’s still a question worth asking and one that takes courage to ask.
Downvoted because the original post didn’t so much ask a question as make an assertion which I personally didn’t find so valuable. As you point out, why would anyone come here for political discussion in the first place? So I downvoted it, because that’s what the karma system is for. In the end, a karma point is just a karma point. Nothing personal in it.
Nobody seemed to me to suggest there was anything personal in it in the first place, so I have to wonder why you’re giving a disclaimer about something nobody said or seemed to think.
I think it’s a bit silly to call it “courageous” to criticize an online forum. At worst it makes me feel slightly bad when my posts get downvoted as a result. But I appreciate that you are trying to encourage meaningful criticism on Less Wrong, which I feel is badly needed. So thank you for that.
Upvoted for trying to make Less Wrong a better place.
Well said! Well said indeed! And for that I will award you...a karma point!
Wow, this thread has turned out much, much worse than I had expected. There’s a handful of reasonable people around here and for the rest… either people got completely mindkilled or Less Wrong is simply lost as a community. All-capital-letter rants and insults get upvoted for all to see. Answering questions honestly and politely will get you downvoted. Apparently I am signalling “tribal affiliation” by saying that racism is bad, and using hatred of gay people as an example means I must be a liberal. Saying that I want political parties to be sane apparently means that I am left of centre. I am forced to conclude that the commonly heard critique, that Less Wrong is a cesspool of lunacy, is entirely correct. I feel that the few genuinely rational people on this website, like Eliezer and Yvain and Chronodas and Alicorn and the rest, are all drowned out in a sea of crazy.
The fact that people here go apeshit over the barest suggestion of politics doesn’t tell me that it’s impossible to be rational about politics, it tells me that Less Wrong fails the simplest test of rationality. If you can only be rational when you don’t feel any pressure to be irrational, you’re not really rational to begin with.
Oh well, so much for that.
I think I’ve figured out the problem here, and I’m curious to see if I’m right;
In essence, the issue is that people here on LW are very focused on content, while you seem very focused on tone.
An excruciatingly polite and carefully worded post which contains little more information than “X is bad” is simply much less valuable here than an all-caps angry “rant” which explains its reasoning succinctly and logically. Obviously we do value conversational etiquette here, but that is at best a secondary concern; it is much more important what you say than how you say it. Offensiveness of a statement is largely orthogonal to its truth value, so people looking for the truth shouldn’t be afraid to test offensive hypotheses.
I promise you, sincerely, that if you ever decide to post a substantive criticism of Libertarianism/Objectivism, or a defense of your own political views, that I will not upvote or downvote them based on anything other than their factual content and logical cohesion. I also believe, with good reason, that other LW posters will generally do the same.
Thank you for the polite tone and reasonable argument -I do think those things are important- but I disagree with your observation. I mostly notice that I am getting 0 benefit of the doubt from many of the people here. I start a thread offering criticism—people assume I am a troll. I offer hatred of gays as a politics-neutral example of something that’s clearly bad—people assume I am pushing a left wing agenda. I notice that even you seem to believe that my post was about libertarianism/conservatism being bad, even though I have said nothing of the sort.
I think a big part of it is culture clash: I am used to being able to reference things which I perceive as obviously true -such as global warming, evolution and so on- without needing to couch it in disclaimers. However here on Less Wrong, a considerable number of users are American, who are apparently offended by these things and assume I must be trying to troll for a reaction. Compare this with for example the James Randi forums, where it is considered perfectly acceptable to share a laugh about crazy people and politics in the US regardless of whether you are left- or right-wing, and without anyone supposing that you must have a political agenda because of it.
If something seems obviously true to you and someone disagrees, then either they don’t see the obvious truth (i.e. they’re “crazy”) or your views only seem obvious because they are the ones you are familiar with.
Most people throughout history have seen it as perfectly obvious that the sun revolves around the earth, that the world was put together intentionally by some deity or other, and that our standards of behavior could influence the timing and severity of natural disasters. Most people throughout history have also seen it as perfectly obvious that objects fall when you let go of them, that humans reproduce sexually, and that 2+2=4. The only way to distinguish which of our “obvious” beliefs are true and which are false is to try to examine them as objectively as possible, which is where rationality comes in.
Now some beliefs, like those of UFOlogists, dowsers, and most theistic religions, can be easily debunked by modern science, and as such are not relevant to discuss unless new evidence for them emerges. In general, LW is pretty good about not giving these beliefs space; even the occasional bit of snake oil you see has some fringe-science explanation for why it might work.
Other beliefs, like those of environmentalists, libertarians, and racial realists, have a plausible scientific justification and cannot be dismissed out of hand; these ideas should be debated so that we can determine their truth value. But that doesn’t mean we need to pull punches either; that which can be destroyed by the truth should be, and that destruction goes a lot faster if we expose an idea to light rather than letting it sit unexamined.
As I said before, “[The o]ffensiveness of a statement is largely orthogonal to its truth value, so people looking for the truth shouldn’t be afraid to test offensive hypotheses.” If you see someone articulating a view which seems insane, finding out why they believe it and engaging it is the only path which will lead you to the truth in the long run.
I notice that you are perfectly happy to say that UFOs, dowsers and theistic religions can be debunked and should not be given credence. On the other hand, you say that political ideas have science behind them and should be debated. Do you really believe that there is zero overlap between “political ideologies” and “Beliefs that have no science behind them and should be given no credence”? Do you really believe we live in that perfectly convenient world?
Or, and here is an alternative explanation: Are you perhaps unwilling to judge political ideologies based on the same standards as any other idea that you perceive to be factually wrong purely because you would insult some members of your community in the process? And if that is the case, how is your objection any different from saying that I shouldn’t call a belief in dowsing crazy, because some of the members of Less Wrong believe in dowsing?
(obvious disclaimer: Obviously I’m not saying that factually wrong political beliefs are limited to one “side” of the argument, etcetera etcetera etcetera)
I’d say religion is pretty solidly political, but I admit I did choose my examples a little poorly. I’m sorry if that has led to confusion because it seems this obscured my point.
Of course not. Marxism as an ideology, and most communist/socialist economic policy in practice, go against core principles of economics on which there is strong consensus. Most advocated forms of theocracy have little theoretical merit either, as they are based on easily disproved doctrines, and while they can have solid social organization, their inability to adapt technologically leaves them in the dust in the long run. Other utopian societies like the Shakers are not viable, as people who don’t reproduce will simply not pass on their culture. Examples abound; it is trivially easy to find them.
But most of the ideas you object to debating here don’t fall into that category. A realistic view of race or a belief in more limited government is supported by mainstream biology/psychometry and economics respectively, and thus shouldn’t be dismissed out of hand. That was a big part of my point; your idea of what is obviously right or obviously wrong is not necessarily accurate, and calling people crazy for believing something without examining the factual basis of that belief is an epistemological mistake.
I hate to do this, but I can’t help but notice that the political ideologies that you say have no credence are left wing. I also notice that the political ideology that you say I can’t dismiss is right wing. I do not see why it should be allowed to call communism unfounded but not to say the same of Objectivism, or laissez-faire capitalism (in extreme forms for the latter at least, before someone complains. There’s nothing wrong with moderate socialist policies either, as they work quite well in practice, despite what the US consensus seems to be).
I’d be glad if I could foist theocracies on the left but sadly I think they probably fit better on the right.
Because communism is based on an economic theory which is directly contradicted by the modern economic consensus, and has consistently failed in practice, while the main contention against laissez-faire capitalism from an economic side is where it ignores market failure (which, according to mainstream economics, happens due to unclear property rights, a major concern of libertarians to say the least).
Essentially, if the issue deals with settled science then the side with the science wins no contest. If there isn’t a consensus, or there is ambiguity with which side the science supports, then there is more than enough room for rational debate.
You have to admit that the oft-held view that the market will solve all problems is a clear example of an irrational viewpoint though. Big-L libertarians at least, who in my experience base their views entirely on ideology, shouldn’t be given any more or less credence than Communists. Saying that everything will work out if everybody would just be nice to each other seems exactly as irrational to me as saying that everything will work out if only 95% of government tasks are eliminated so the free market can take care of everything.
If you disagree, then it really seems to me that you have some strange double standard, where right-wing ideology is somehow subject to different rules than other forms of ideology.
(I don’t think theocracy is necessarily left or right, but nevermind that point)
One of the things you’re going to have to get used to here if you want to stay is that the regulars here have generally given their views some serious thought. If you talk to a libertarian on LW, he will have read up on his Friedman, possibly some Hayek and von Mises too, not to mention having at least an undergraduate level understanding of economics. If you assume they’re using lowest-common-denominator arguments they will assume you are either ignorant of the actual facts of the matter or are being intentionally patronizing.
The same goes for any ideology, right or left; if people talk about it, it will be a fairly nuanced discussion and if you follow it seriously you’ll be up to your eyeballs in PubMed and Jstor papers for the next few hours if not actually buying books. I think that’s why you hardly see anyone advocating theistic religion, capital-c Communism or any other well-debunked ideology; they can’t put up the facts and end up getting downvoted out of existence.
I guess it’s lucky I don’t then; people are just as capable of being irrational about right wing politics as they are about anything else under the sun. Politics brings out the ape in all of us, so it makes sense that there are so many stupid proponents of defensible ideologies.
But, as the local saying goes, ‘reversed stupidity is not intelligence.’ If the dumbest man in the world tells you the sky is blue, he’s not any less right for it. The only way to find out if an argument is right is to examine it for soundness and validity, and until you’ve actually judged its merits your opinions are by definition prejudicial.
FWIW, if by “troll” you mean someone who is deliberately looking to start up trouble rather than genuinely trying to engage with the community in some way, I don’t think you’re a troll.
That said, you’re right that I have stopped giving you the benefit of the doubt more generally.
I’m curious: What was your doubt, and in what way did you stop giving me the benefit of it? In what way did I fail to meet your expectations?
In general, I approach people with the assumption (A1) that they are seeking to understand and be understood, to learn from others and to be learned from. I often observe behavior that implies A1 is false, but I make an effort to categorize that behavior as noise rather than signal, both because I think that’s often true, and because interacting under A1 is more pleasant for me.
That’s roughly what I mean by benefit of the doubt, here: assuming A1.
With some people, I eventually drop that assumption. In your case, I stopped believing that you are seeking to understand or to learn; I started believing that you are only here to be understood and learned from.
This changes my interactions. For example, when someone says something I think is false, A1 leads me to explain why I think that is false, and what I think is true instead. Once I stopped assuming A1 in your case, there was no point to making such explanations… I no longer assume that you care what I think about it.
I don’t remember what caused me to drop A1 in your specific case. You’re far from the only person I’ve done this for, and it’s not like I keep records or anything.
In most cases, it’s a series of interactions (both with me and others) where the person seems to engage only with those points they can address while reinforcing their main point, while ignoring or evading or distorting points that would seem to weaken it.
I’m surprised to hear that I give the impression that I’m not willing to learn. As far as I know I have conceded points to others at many points, identified my own faults, and generally been as humble as I can muster. However, I find it interesting that you can’t remember what specifically I did wrong. I think this reinforces the idea that it’s something about my general tone that drives people up the walls. Several others have offered that I would get better support if I didn’t dance around issues and were more blunt about it. Perhaps it is my being careful around what I perceive to be a hot-button topic that is perceived as being disingenuous, and makes people so mad?
On the one hand, I find this idea interesting. On the other hand I can’t believe that a topic called “why republicans generally profess more incorrect viewpoints than democrats” would actually be well received at all, no matter how truthful or well-written.
As a necessary condition, such a post would have to contain a list of specific generally Republican beliefs and another list of generally Democratic beliefs. Then a summary of right and wrong points, and then some explanation of why one side lost more points than the other.
I’m not saying that it would be enough to get the topic upvoted. But it is the minimum necessary to discuss the topic rationally.
This is the flavor of discussion LessWrong culture prefers. Saying “Republicans believe X, Y, Z. X, Y, Z is false. Therefore Republican beliefs are false. To compare, Democrats believe in Q, which is true.” could perhaps be acceptable in some context. Because it allows a debate about fact. -- Is it true that Republicans generally believe X? Is X really false? Is Q really true? Is Q a reasonable analogy for X in its role in given party’s belief-set? -- These things can perhaps be discussed reasonably. Or at least something interesting can be said about them.
You can say bluntly “X is false”. You can say bluntly “most Republicans believe X”, although it would be better to also provide a hyperlink to an opinion poll or something. -- The problem is speaking bluntly in wide generalizations. The key is to be specific.
Criticism of Republicans is a smaller violation of local norms than making a general claim about unspecified things. I guess in your mind there are specific examples of beliefs that Republicans are wrong about. So say it. If you can’t say the specific things, then don’t say the generalization. It’s not because we don’t want to hear generalizations about Republicans; it’s because we don’t want to hear generalizations without examples regardless of the topic, but we are more sensitive about it in political topics, because there it happens too often.
Okay, this explanation is not completely correct… some readers would object to political discussion of any kind. The important thing is to realize that you have violated the local norms in ways more grave than merely criticizing a specific political party. And you seem to be blind to this.
I notice that everyone who disagrees with me here seems to be supremely confident that I am wrong, that Less Wrong doesn’t have the flaws I see, and that I must just be blind because I can’t see how wrong I am. I wonder how many have actually stopped and considered whether or not I might have a point, and I wonder how many are so confident that I am wrong merely because everyone else seems so confident that I am wrong.
How sure are you really that criticism of Republicans and libertarians is not the issue? That is to say, how sure are you that I would have received the same reaction if I had instead written about things I think Democrats do wrong? I ask because I never see anyone couch their criticism of left-wing viewpoints by saying “well, of course Republicans have their faults too”. Did you ever see anyone here say that communism doesn’t work, and then get in trouble for it because they weren’t specific enough?
Don’t get me wrong, I am not saying that left-wing people are magically better people, or some nonsense like that. I am saying that people naturally tend to be more defensive and insecure about beliefs that are ill-supported by evidence. You would in fact receive the same reaction if you told a communist that their beliefs are factually incorrect. They would say that you are just blind and that you can’t see that you are biased etcetera etcetera. The difference being that communism is already considered crazy here, just as religion is considered crazy, so it is okay to criticize those things by Less Wrong norms.
Again, I am not saying that all beliefs held by Republicans are crazy, or that all Democratic beliefs are right, etcetera etcetera, obvious disclaimer. I don’t actually want to have a discussion about what most Republicans believe, since that isn’t helpful: I just want to be able to say “X is false”.
I actually agree with most of your post, though. Believe it or not, I am not some kind of crazy extremist whose views you can never change. In fact, I agree that my biggest fault was that in the OP I said that the other party is “kind of crazy” instead of “holds beliefs that are kind of crazy”, and I edited it accordingly. It didn’t receive any less hate after the edit, though, so I don’t think it helped. I suspect that a large part of it is that people generally enjoy being offended. It gives you that nice feeling of righteous indignation. I think this is what makes writing a post that doesn’t offend anybody so hard.
I think that “The Non-Libertarian FAQ”, although not published on LW, is popular here. If we all could debate politics on this level, we probably wouldn’t need the norm against discussing politics.
I don’t have a similar example for Republicans, but I guess most of them would be offended by reading LW opinions about religion.
I haven’t yet met a communist who would react to criticism of their beliefs by saying: “please give me some specific evidence”. They usually react in exactly the opposite way; the experimental evidence is the last thing they would want to discuss; it’s the great idea that matters, and there is no need to learn from history, because next time it will magically work perfectly. And those are the more sane among them; the less sane will say that all evidence is just American propaganda, including the things I saw with my own eyes as a child. (There are many communists in my country, so it is not difficult to meet enough samples.)
Without saying which beliefs specifically you mean, this is not an improvement. Okay, I guess it is a small move towards politeness, but not towards fact-based discussion.
This is true in general, but this is not the main problem with your article. If you think it is, your model of LessWrong is incorrect.
Perhaps. I find it unlikely, personally, but I’m hardly an expert.
Do you? Why?
I agree that a topic titled that would be poorly received, and would have to be exceptionally well constructed to be considered valuable. I think there are people who could write such a post, were they motivated to… Yvain comes to mind… but I don’t think I could, and I don’t think you could.
Sure. But that’s a false dichotomy: an excruciatingly polite and carefully worded post which explains its reasoning succinctly and logically is also possible and would be even better.
Agreed, though I would also endorse establishing a culture where a post that explains its reasoning succinctly and logically without devoting additional care to being excruciatingly polite would be considered better than all of the above.
I think my following sentence handled that nicely; etiquette is a concern, but a secondary one.
If your goal is good epistemology, avoiding offense should always lose to making accurate statements when the two conflict. That is almost tautologically true, yet still useful to keep in mind since the two often are at odds with one another.
FWIW, I agree with you that LessWrong ought not discuss partisan-politically charged topics, and have held this position from long before your dramatic arrival.
Granted, I would have probably phrased the position more neutrally, something like “The local rationality level isn’t reliably high enough to support productive discussions of such topics, especially given the sorts of people who will show up here if such discussions are common,” rather than something like “LessWrong is such an irrational cesspool of lunacy that, unlike the rational and reasonable people who frequent other Internet forums, it is incapable of discussing politics without going apeshit and thereby drowning in a sea of crazy.”
But it amounts to the same thing from a policy perspective.
I do find it hysterically funny that someone who chooses the second formulation over the first is also devoting so much energy to policing our tone, though it’s not really surprising.
I will note that the second formulation was the result of having spent a significant amount of time and effort attempting to present my viewpoints in as reasonable a way as possible to people who would evidently rather characterize me as a political agent pushing a left-wing agenda. A certain amount of frustration played a part.
Anyway, you misread my conclusion. The point is not that Less Wrong ought not to discuss partisan-politically charged topics; I am saying that the fact that the barest mention of politics causes all rationality to fly out the window means that Less Wrong has failed in its objective. People here aren’t any better at discussing emotional topics than on other forums on the internet. (I am now tempted to post a thread on JREF criticizing their tendency to call everything they disagree with “woo”, including transhumanism, just to see if I get the same reaction as here on Less Wrong.)
Yes, I understand that you only chose to be insulting and rude to us because you were frustrated with us.
I think it likely that the people you are chastising for being insulting and rude to you were frustrated with you, as well.
Yes, I understand that this is your main point.
I think there is a very big difference between criticizing Less Wrong as a community (which I did) and lambasting an individual (which is what is happening to me). If criticism of a community is taken as a personal insult (and it appears to be), then that is a perfect example of everything that I perceive to be wrong with said community. Note that as a member, I am part of this community, regardless of how much some people would rather see me gone. This discussion really shouldn’t be about us versus them, or Less Wrong versus Sophronius in this case.
I agree that there’s a big difference between criticizing a community and criticizing an individual.
I agree that you did criticize LessWrong as a community.
I agree that you are an individual.
I did not say that criticizing the community was taken as a personal insult, merely that the people who were insulting and rude to you were frustrated with you.
I doubt that most of our frustration with you is because we treat your criticisms of the community as a personal insult.
I am unsurprised by your conclusions.
This is the part I was referring to. I criticized Less Wrong. You interpret this as “being rude to us”. This is party politics, plain and simple, with Less Wrong as the party. I did not criticize you, nor was I rude to you, so you should not say “us”.
OK. Thanks for the clarification.
Selection bias—maybe there are sane people on LW but they just aren’t commenting in this thread.
Yes, thank you, that’s very possible. In fact, I have also considered that my own tone and style of writing attract the wrong kind of people. Maybe I am too confrontational somehow? It is also possible that the topics I choose just make rational people go “no good can come of this” and leave. But the sheer glee that some people here seem to take in ruining this thread (see Lumifer’s post below, for example) tells me that at least a significant number of people here don’t care about behaving rationally, optimally, or helpfully.
Care to take me up on my previous offer and define what you mean by “racism”?
For your convenience here are some common definitions I’ve seen used:
1) Someone who believes or alieves that there are significant differences between people of different races, regardless of whether he has a rational basis for these beliefs/aliefs.
2) Someone who believes or alieves that there are significant differences between people of different races without a rational basis for these beliefs/aliefs.
And for the sake of completeness, here is the “definition” that appears to be frequently used by the “Social Justice” community (note, I’m not saying I suspect you’re using this definition):
3) I refuse to provide a definition, but all white people are racist and have a duty to feel guilty about this fact.
If I try to come up with an example of something that should be agreed upon by any political party, and I come up with “racism is bad”, why would you feel the need to question what I mean by that? Am I really wrong in saying that racism being bad is something that everyone should agree with? Does this really need to be turned into another political argument?
But fine, I will give you the benefit of the doubt and assume that you are genuinely curious, and point out that yes, I mean that arbitrary racism is bad (definition 2). No, hiring black people for movies when you want someone with black skin for the role is not racism. Yes, the black activist who yells that only white people are racist is racist himself. Yes, I agree that one shouldn’t persecute people merely for having the belief that there are differences between races. Yes, I am aware that some people feel that science on race should be banned (I know some) and no that’s not something I agree with. When I said “racism is bad” I did not want to discuss any of these more complex issues however, I meant it in the really really obvious way.
Because the phrase “racism is bad” is shorthand that covers a large variety of beliefs, hiding differences in belief behind highly emotional language. “Racism is bad” is easily used to require or prohibit affirmative action, to require or prohibit drug laws, to require or prohibit gun laws, and to require or prohibit several forms of economic thought, all without changing the phrase.
I hold a position very similar to the one you’ve listed, but I’ve seen people who list the same positions yet feel that they’re better served in them by the Republican Party than the Democratic Party, and given some of the response to Schuette, I can’t say I don’t understand them.
What about someone who believes that people of certain races are more likely to commit crimes, and thus votes for parties that are in favor of limiting immigration?
Oh, boy. Did you just rage-quit? Can I have your stuff? :-D
-- Stephen Colbert