I like Less Wrong—there are courtesy rules here which keep it from going wrong in ways which are common in SJ circles. People get credit for learning rather than being expected to get everything right, and it’s at least somewhat unusual to attack people for having bad motivations.
This being said, there are squicky features here, and I’m not just talking about claims that women are different from men—oddly enough, the claimed difference generally (always?) seems to be to women’s disadvantage, even though there’s some evidence that women are more trustworthy at running banks and investment funds.
I tolerate posts like this, but LW would seem like a friendlier place (to me) and possibly even be more rational if articles about gender issues would take utility for men and women equally seriously.
Reactionaries had something of a home here—less so after the formation of More Right, I think. I haven’t seen evidence of anything especially extreme on the egalitarian side, though there might be as good a rationalist case to be made for thorough reparations. Now that I think about it, I haven’t even seen a case made for strong economic support for intelligent poor children.
Trolley problems… I keep getting an impression that the point is that people don’t have enough inhibitions against killing for the greater good. (By the way, how easy do you think it would be to move an unwilling person who weighs a good bit more than you do?)
And torture seems to be taken too lightly. It’s a real world problem, not just a token to be passed around in arguments.
What the original post made me realize is that what I consider most certain to be valuable at LW is the instrumental rationality material, and it would be a good thing for there to also be an online site for instrumental rationality without the “let’s do low-empathy discussions to prove how rational we are” angle.
It’s funny, I am totally sympathetic to everything you wrote here, yet all I can think is, “my daily life is chock full of people incapable of grappling with trolley problems or discussing torture concretely, why are you trying to make LessWrong more like real life?”
This encourages me to think more about just what I was proposing…
A lot of what I was trying to do was demonstrate that I think the writer of the original link has a point. This is not quite the same thing as a call for action, even though I’d be happier without the trolley problems.
Another angle I was taking was that LW is theoretically open-minded, but is actually much more hospitable to some sorts of radical low-empathy ideas than others.
What I think is more feasible than changing LW (which is not to say very feasible) would be an empathy-tilted rationalist blog. It might be an independent development or started by disaffected LWers.
Have a probably empathic idea: HBD focuses on IQ, but there’s little or no discussion of the possibility of tech for raising IQ from 90 or so to 110, even though that would make a large positive difference.
Meanwhile, I’ll mention Hillary Rettig, a progressive who’s good on instrumental rationality.
Are you talking about raising the IQ of a person, or the average IQ of a population? There’s little discussion of the former because decades of failed interventions have made “you can’t raise an existing person’s IQ reliably” the default hypothesis. Once you’ve got past the easy childhood stuff like nutrition, lead paint and iodine deficiencies, there’s not a lot you can do. Aside from some kind of Black Swan like a pill that raises you up a standard deviation, there’s not much room for hope.
Raising the IQ of the next generations, though: there’s discussion of that, since all the theory deems it totally possible. See here for an example.
But yes, in absolute terms there’s little discussion on how to solve the problem. Many writers assume the problem is politically intractable.
There’s a good amount of interest in e.g. r/nootropics, and Gwern has written about the possible benefits of supplementing local water supplies and whatnot. Part of the problem is that the solutions are politically complex, since they involve A) convincing sufficient people that IQ is really a thing and then B) getting large groups of people to admit they’re dumb and want their children to be smarter. In terms of technical solutions, I don’t think we’re there cybernetically yet. Genetic solutions have the whole eugenics problem to contend with, though China seems to be working on it regardless.
Have a probably empathic idea: HBD focuses on IQ, but there’s little or no discussion of the possibility of tech for raising IQ from 90 or so to 110, even though that would make a large positive difference.
What do you mean by this? Technology that raises IQ in the next generation, or in existing people? The latter is far from our abilities, and the former would not help us at all when it comes to the perception of eugenics friendliness.
Have a probably empathic idea: HBD focuses on IQ, but there’s little or no discussion of the possibility of tech for raising IQ from 90 or so to 110, even though that would make a large positive difference.
Re: trolley problems and torture:

I seem to remember reading somewhere, I think it was something Daniel Dennett said, about the value of having philosophers willing to explore ideas that are (and maybe should be) taboo for ordinary people.
Take Peter Singer, for example. I don’t buy the whole standard consequentialist package in ethics, but I really like Peter Singer. And he says things that are really shocking to many people, for example arguing that infanticide is often morally OK. But I suspect being willing to consider shocking ideas like that may be a prerequisite for being able to make progress on certain really important topics (see Singer’s ideas about animal rights, charity, and some areas of medical ethics). Not everyone needs to be Peter Singer, but having a few Peter Singers—even a whole blog community of them—seems really valuable.
A couple other points: on torture, I don’t think it’s exactly being taken lightly. Rather, I suspect the reason it’s used as an example is precisely because it is an archetypal example of a really horrible thing.
As for seeming un-empathic, I don’t think it’s just rationality signaling. There’s an issue that when you’re making decisions that affect huge numbers of people, being too driven by your feelings about one case can lead to decisions that are really bad for the other people involved, and that you wouldn’t make if you really thought about it.
LW historically has had a habit of choosing examples with shock value beyond what’s necessary to make the point; granted, this no longer seems quite so fashionable for new top-level content, but it does remain noticeable in comments and in older posts, including parts of the Sequences. I view this habit as basically a social display: a way of signaling “I can handle this without getting mind-killed”. Now, let me be very clear: I do not regard this as intrinsically destructive, nor do I place substantial terminal value on avoiding offense. But I do think its higher-order effects have avoidably reduced the quality of discussion here.
The fundamental issue is that not everyone here is equally able to avoid derailing discussions when exposed to topics like, say, torture. Even people who are generally very rational may find particular subjects intolerable; judging from experience, in fact, I’d say that most of the people here have one or two they can’t handle, including myself. Avoiding these is part of our culture when they overlap with talking points in mainstream politics, and that’s good; but there remains a wide scope of weakly politicized yet potentially mindkilling ones out there, many of which we’ve historically thrown around with the gleeful abandon of a velociraptor plunging into a vat full of raw meat.
I think we should stop doing that, at least to the extent that we avoid conventional politics and for most of the same reasons.
Torture is a uniquely good tool in thought experiments, when you need something bad, and I refuse to give it up.
Death is too complicated (and therefore invites too much hypothetical-fighting). There’re questions of what quality of life you’re missing, how long you would have lived, etc., and worse yet, some people think it’s a good thing. No one* thinks torture (of the average person) is a good thing. When people say things like “I want to go on living no matter what my life is like”, the only correct answer is extremely unpleasant experiences, which are also called torture. I could wrap the idea of torture in a bunch of sterile-sounding abstractions, but no one likes obfuscation, and it would still be torture. If leaving out the word “torture” changes their reaction, then including it is necessary to make my point. Anything else equivalently bad that could do the job in my thought experiment would probably be some more specific thing than torture, or disturb people as much as torture anyway.
(*Colloquial sense of “no one”)
When I need to make an argument about factory farming, and I want to draw an accurate analogy, I need to bring up torture, because that is an accurate description of what actually happens in factory farms. It’s not just the death in them that bothers me. Indeed, to counter the Robin Hanson argument that meat is moral, references to actual torture are the only answer (linked to cache version because as of writing this the page is down).
When I am arguing with a theist, and I need to sidestep their cached thought that people in Hell deserve it, I have to use the word torture, because that is a boo-light, and I am fully justified in using it because torture is what we’re talking about.
If you can’t discuss these things with me, that is too bad. Children likely have valuable insights that adult conversations are missing due to their absence, but I am still gonna talk about these things. So if you must leave the room while the grown-ups are talking, then go. Grown-ups’ conversations are important, and making everything kid-friendly is not an improvement. (This is also my response to the entire essay that started this thread.)
I have always seen LessWrong as a place for grown-ups. An almost-grown-up can gain a lot by jumping into the grown-ups’ conversation instead of talking with kids, but the real grown-ups still need to talk about real grown-up things.
As for your fashionable signaling hypothesis for jarring and vivid examples, as Lumifer pointed out, you just did it yourself. Were you signaling then? I bet not; I bet you forgot that “meat” is a disturbing mind-killer to some people, and when the idea popped into your mind, you thought “that feels like it makes my point well, and sounds kind of amusing,” so you wrote it. If I told you to watch your thought experiments and examples and not bring up meat because it might drive people off, you would probably think (and be right) that that is too much effort on behalf of too small a population; if people were socially expected to watch what they said all the time like that, it would make posting less enjoyable. The feeling of being made to act in a kid-friendly way is not a good one.
I don’t like being around literal kids because (among other things) people expect me not to swear around them (Also partly because people expect me to not tell them that Santa isn’t real, etc). And not being able to swear is frustrating. This is the same feeling that the policy you’re advocating will impose on the rest of LessWrong who are not psychologically scarred.
I expect you’re thinking, “Yeah, but like I said, there are lots of potential mindkillers, and lots more than a small minority are mindkilled by at least some of them. It doesn’t have to be the same mindkiller that kills every mind.” But either handling your personal mindkillers, or at least just quietly sitting out and not making a fuss while other people talk about them is the price you pay for sitting at the grown-ups table, and in return you don’t have to be super-careful about stepping on everyone else’s toes.
I’m fascinated, because those are not at all the sorts of mentions of torture that bother me—what gets to me is the torture vs. dust specks and “is that worth fifty years of torture? what if the person is memory-wiped afterwards?” discussions.
Those do mind-kill me, and I pretty much don’t read them.
But either handling your personal mindkillers, or at least just quietly sitting out and not making a fuss while other people talk about them is the price you pay for sitting at the grown-ups table, and in return you don’t have to be super-careful about stepping on everyone else’s toes.
Generally speaking, it’s not my personal mindkillers that I’m trying to avoid; I do have some, but they aren’t the ones I mentioned and I know well enough to leave them alone. Nor do I much care about the occasional isolated outburst from someone else that I can downvote and ignore. It’s the thousand-post threads that could have been summarized without loss of generality in ten good ones. It’s the extended bouts of ideological angst that recur every few months without bringing up any new information. It’s a community phenomenon, not a personal one.
Meat used as part of a throwaway metaphor doesn’t trigger that sort of thing, as evidenced by the fact that I am not now defending myself against a howling mob. (Incidentally, neither does death as such; it’s too abstract.) Torture used as part of an extended thought experiment, without hemming it in with plenty of obligatory hand-wringing, does. So do a number of other things that I’m sure you can remember from experience. I’m not trying to suggest a precautionary principle here; I hate those things and I’m sure you do too. But we do have that experience to draw on, and it now seems to me that persisting in the use of language and concepts we know that we as a community can’t handle in an adult manner is symptomatic of gluttony for punishment, of bloody-mindedness to the point of pathology, or of some truly outstanding cluelessness.
I’d like it too if LW could reliably be treated as the grown-ups’ table. But that isn’t the world we live in.
Meat used as part of a throwaway metaphor doesn’t trigger that sort of thing.
When I hear the word, 1-3 images of tortured animals usually briefly cross my mind, and I know there are much more emotional vegans than me. Speak for yourself. And even if torture mindkills some people, like I said, you can’t properly discuss some important topics without it, so if it spawns 1000-post threads that aren’t worth reading, too bad. (When I’ve used torture thought experiments so far, it hasn’t.)
Edit: Actually, I probably would stop talking about torture if every time I did it spawned a 1000-post thread that wasn’t worth reading. But if that was what LessWrong was like, I would probably leave. Or if it was in every other respect the same (an implausible counterfactual), stay, but not enjoy it nearly as much.
Torture as an example seems like a bad idea in the same way that the Reagan/Quaker/Pacifist question is a bad example—it’s political, it draws one’s attention away from the actual argument.
I tolerate posts like this, but LW would seem like a friendlier place (to me) and possibly even be more rational if articles about gender issues would take utility for men and women equally seriously.
That post is by GLaDOS, who is female. I doubt GLaDOS values women less than men, but it would be nice if you would actually make a case for your insult/accusation rather than just throwing it in without any discussion.
It seems clear to me that the post was not about weighing the pros and cons of divorce in total (something which would take a lot more than a short post). The post makes a more abstract point about the way incentive changes can have large impacts even without people coordinating to deliberately change behavior. That seems like a very appropriate topic for Less Wrong.
I believe that the “problem” is that Lesswrong loves contrarians.
If a smart-sounding article espousing conservative opinions on social issues appears, most lesswrongers will disagree but be interested in reading anyway because it’s novel and there is a dearth of smart conservative opinions in the world, and the exciting chance to “actually change their mind” looms.
If a smart-sounding article espousing liberal opinions on social issues appears, most lesswrongers will agree but be uninterested in reading because they’ve heard it all before, and it’s preaching to the choir, and it’s political and mind-killing, etc.
This reversal of traditional attitudes to disagreement has its merits, but we’re seeing the downsides too. (one of the many reasons I advocate having separate feedback buttons for agreement, interest, and quality assessment)
Doesn’t this problem gradually fix itself? For example, at the beginning I was interested in Moldbug’s articles, but these days I just consider them boring. I have already heard the big picture; there is now nothing new, just reiterating what was already said; the lack of evidence or even clear explanations is very annoying, and I have already given up hope that it could be improved.
These days, if someone says something seemingly smart like “Cthulhu always swims left”, my first thought is: give me a definition of what the hell you even mean by this, then give me evidence that it really happens, and if you don’t give any of it (which is my expectation, based on previous experience) then please just shut up, because you’re wasting my time.
Speaking for myself, the neo-reactionaries had their chance (which I consider to be a good thing—because I learned a few interesting things), and they wasted it.
I’m not sure, but my personal experience does mostly mirror yours. LW is not a stable group, though—there’s a cycle of users entering and leaving, and the total number of active people at any given time is quite small.
I tolerate posts like [More ominous than a (marriage) strike], but LW would seem like a friendlier place (to me) and possibly even be more rational if articles about gender issues would take utility for men and women equally seriously.
Well, since that post quotes liberally from a “manosphere” website, you’d be justified in assuming that it does take men’s welfare more seriously than women’s. But for what it’s worth, it’s mostly concerned with trying to predict men’s strategically reasonable response to a change in institutions, and determining the resulting equilibrium. Whether you value men’s and women’s welfare equally doesn’t much affect how bad the projected outcome is.
I keep getting an impression that the point is that people don’t have enough inhibitions against killing for the greater good …
Why? A standard result in the trolley-problem literature is that folks deviate from utilitarian ethics in a way that’s suggestive of just such a moral injunction. People on LW are different, in that they tend to be highly committed to utilitarianism. But we already knew that—the way trolley problems are discussed here is just more evidence of this fact.
Now that I think about it, I haven’t even seen a case made for strong economic support for intelligent poor children.
Does there need to be a case made for that? This seems like one of the earliest identified reasons for redistributing wealth. People and organizations have sponsored poor talented youth, and it has been considered virtuous, since ancient Greece. And the reforms of education and welfare in the 19th and 20th centuries often emphasized this example, though they may not have always done much about it.
In Slovenia at least we have scholarships handed out to people who perform very well on aptitude tests; is this something that doesn’t happen as reliably in the US?
In Slovenia at least we have scholarships handed out to people who perform very well on aptitude tests; is this something that doesn’t happen as reliably in the US?
Several states have merit-based scholarships (though these usually require performance in classes as well as aptitude tests, so there is a conscientiousness element as well as an intelligence element). I myself am going to university on a Bright Future scholarship. However, my impression is that federal need-based aid is a lot more common than state merit-based aid.
Trolley problems… I keep getting an impression that the point is that people don’t have enough inhibitions against killing for the greater good.
I don’t really like trolley problems either, but I don’t think they can be waved away. When programming a self-driving car’s decision algorithm for reacting when a car full of people skids in front of it while there is a single pedestrian on the sidewalk where it would have to swerve, you are essentially dealing with a real-world trolley problem.
Better to hit the other car rather than the pedestrian. The people in the car are protected by a lot of metal and will tend to suffer much less damage.
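The tradeoff described in the two comments above can be made concrete with a toy expected-harm comparison. Every probability, severity weight, and option name below is an illustrative assumption invented for this sketch; no real autonomous-vehicle system works from a table like this:

```python
# Toy illustration of the trolley-style tradeoff described above.
# All probabilities and severity weights are made-up assumptions,
# chosen only to make the structure of the comparison visible.

def expected_harm(p_injury, severity, people_affected):
    """Expected harm of an outcome: injury probability times severity, per person."""
    return p_injury * severity * people_affected

def choose_maneuver(options):
    """Pick the maneuver with the lowest total expected harm."""
    return min(options, key=lambda name: sum(
        expected_harm(*outcome) for outcome in options[name]
    ))

# Occupants of the other car are protected by crumple zones and airbags,
# so colliding with them carries lower injury probability and severity
# than striking an unprotected pedestrian.
options = {
    "hit_other_car": [(0.3, 2.0, 4)],        # 4 occupants, well protected
    "swerve_to_sidewalk": [(0.9, 10.0, 1)],  # 1 pedestrian, unprotected
}

print(choose_maneuver(options))  # prints "hit_other_car"
```

Under these assumed numbers the math backs the comment above: 0.3 × 2.0 × 4 = 2.4 expected harm for hitting the protected car, versus 9.0 for hitting the pedestrian. The philosophical difficulty, of course, is in choosing the weights, not in the arithmetic.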
I think a lot of the focus on trolley problems is that they’re sort of a platonic model of making hard decisions about tradeoffs, with the idea being that if you can convince people it’s right to make tradeoffs in the most obvious situation, they should consider the tradeoffs in much more complicated policy decisions also. E.g., people who propose Basic Income want people to be willing to trade “some of your money” for “greater happiness for many people”. This is also what a lot of the Effective Altruism movement is based on: making GOOD tradeoffs rather than bad ones.
Your examples remind me of this thread on suicide, which is the most distressing thing I’ve read on Less Wrong. (Though it is not exactly an example of “low empathy.”)
I thought it could actually encourage another suicide. Suicidal feelings are ubiquitous, and the actual act is not that uncommon. When it’s committed it’s often a very disproportionate response to misfortune, or even to a mad self-hating inner monologue. I think the sober discussions that take place here about when misogyny, murder, or torture are warranted are mostly in bad taste, but I find it implausible that they will cause a harm worse than offense. Not so for a sober discussion of when suicide is warranted (even though it is not “offensive” in the same sense!).
Thank you. Well, no wonder I was puzzled, since this sentiment is thoroughly alien to me. When I hear of a suicide, my first thought and feeling are: “I’m glad his/her suffering is over”.
I’m surprised you call it “thoroughly alien,” I would have thought my position is thoroughly non-weird, even cliched. You really think nobody ever made a mistake by killing themselves? I won’t try to tell you otherwise, but you must know that that’s not a typical opinion.
Are you talking about raising the IQ of a person, or the average IQ of a population?
I was talking about raising the IQs of large numbers of existing people.
My impression is that there just isn’t much interest in looking for physical solutions.
Compare the amount of interest in combating obesity to the amount of interest in becoming more intelligent.
What does [Edit: raising IQ] have to do with HBD?
If IQs can be raised then one aspect of HBD becomes less important.
Depends on how hard it is, and on whether how far they can be raised depends on their “natural” value.
What do you have against passing real world problems around as tokens in arguments?
...
:-D
Torture is a uniquely good tool in thought experiments, when you need something bad, and I refuse to give it up.
Death is too complicated (and therefore invites too much hypothetical-fighting). There’re questions of what quality of life you’re missing, how long you would have lived, etc, and worse yet, some people think it’s a good thing. No one* thinks torture (of the average person) is a good thing. When people say things like “I want to go on living no matter what my life is like” the only correct answer is extremely unpleasant experiences, which are also called torture. I could wrap the idea of torture in a bunch of sterile-sounding abstractions, but no one likes obfuscation, and it would still be torture. If leaving out the word “torture” changes their reaction, then including it is necessary to make my point. Anything else equivalently bad that could do the job in my thought experiment would probably be some more specific thing than torture, or disturb people as much as torture anyway.
(*Colloquial sense of “no one”)
When I need to make an argument about factory farming, and I want to draw an accurate analogy, I need to bring up torture, because that is an accurate description of what actually happens in factory farms. It’s not just the death in them that bothers me. Indeed, to counter the Robin Hanson argument that meat is moral, references to actual torture are the only answer (linked to cache version because as of writing this the page is down).
When I am arguing with a theist, and I need to sidestep their cached thought that people in Hell deserve it, I have to use the word torture, because that is a boo-light, and I am fully justified in using it because torture is what we're talking about.
If you can't discuss these things with me, that is too bad. Children likely have valuable insights that adult conversations are missing due to their absence, but I am still going to talk about these things. So if you must leave the room while the grown-ups are talking, then go. Grown-ups' conversations are important, and making everything kid-friendly is not an improvement. (This is also my response to the entire essay that started this thread.)
I have always seen LessWrong as a place for grown-ups. An almost-grown-up can gain a lot by jumping into the grown-ups’ conversation instead of talking with kids, but the real grown-ups still need to talk about real grown-up things.
As for your fashionable signaling hypothesis for jarring and vivid examples, as Lumifer pointed out, you just did it yourself. Were you signaling then? I bet not; I bet you forgot that "meat" is a disturbing mind-killer to some people, and when the idea popped into your mind, you thought "that feels like it makes my point well, and sounds kind of amusing," so you wrote it. If I told you to watch your thought experiments and examples and not bring up meat because it might drive people off, you would probably think (and be right) that that is too much effort on behalf of too small a population; if people were socially expected to watch what they said all the time like that, posting would become less enjoyable. The feeling of being made to act in a kid-friendly way is not a good one.
I don't like being around literal kids because (among other things) people expect me not to swear around them (and partly because people expect me not to tell them that Santa isn't real, etc.). And not being able to swear is frustrating. This is the same feeling that the policy you're advocating would impose on the members of LessWrong who are not psychologically scarred.
I expect you're thinking, "Yeah, but like I said, there are lots of potential mindkillers, and lots more than a small minority are mindkilled by at least some of them. It doesn't have to be the same mindkiller that kills every mind." But either handling your personal mindkillers, or at least quietly sitting out without making a fuss while other people discuss them, is the price you pay for sitting at the grown-ups' table; in return, you don't have to be super-careful about stepping on everyone else's toes.
By the way, I didn't downvote you.
I'm fascinated, because those are not at all the sorts of mentions of torture that bother me. What gets to me are the torture-vs.-dust-specks discussions, and the "is that worth fifty years of torture? What if the person is memory-wiped afterwards?" ones.
Those do mind-kill me, and I pretty much don’t read them.
Generally speaking, it’s not my personal mindkillers that I’m trying to avoid; I do have some, but they aren’t the ones I mentioned and I know well enough to leave them alone. Nor do I much care about the occasional isolated outburst from someone else that I can downvote and ignore. It’s the thousand-post threads that could have been summarized without loss of generality in ten good ones. It’s the extended bouts of ideological angst that recur every few months without bringing up any new information. It’s a community phenomenon, not a personal one.
Meat used as part of a throwaway metaphor doesn't trigger that sort of thing, as evidenced by the fact that I am not now defending myself against a howling mob. (Incidentally, neither does death as such; it's too abstract.) Torture used as part of an extended thought experiment, without hemming it in with plenty of obligatory hand-wringing, does. So do a number of other things that I'm sure you can remember from experience. I'm not trying to suggest a precautionary principle here; I hate those things and I'm sure you do too. But we do have that experience to draw on, and it now seems to me that persisting in the use of language and concepts we know that we as a community can't handle in an adult manner is symptomatic either of gluttony for punishment, of bloody-mindedness to the point of pathology, or of some truly outstanding cluelessness.
I’d like it too if LW could reliably be treated as the grown-ups’ table. But that isn’t the world we live in.
When I hear the word, 1-3 images of tortured animals usually briefly cross my mind, and I know there are much more emotional vegans than me. Speak for yourself. And even if torture mindkills some people, like I said, you can’t properly discuss some important topics without it, so if it spawns 1000-post threads that aren’t worth reading, too bad. (When I’ve used torture thought experiments so far, it hasn’t.)
Edit: Actually, I probably would stop talking about torture if every time I did it spawned a 1000-post thread that wasn't worth reading. But if that was what LessWrong was like, I would probably leave. Or, if it was in every other respect the same (an implausible counterfactual), stay, but not enjoy it nearly as much.
Torture as an example seems like a bad idea in the same way that the Reagan/Quaker/Pacifist question is a bad example—it’s political, it draws one’s attention away from the actual argument.
That post is by GLaDOS, who is female. I doubt GLaDOS values women less than men, but it would be nice if you would actually make a case for your insult/accusation rather than just throwing it in without any discussion.
That post struck me as ignoring any advantages divorce might have (like getting out of bad marriages) for women.
It seems clear to me that the post was not about weighing the pros and cons of divorce in total (something which would take a lot more than a short post). The post makes a more abstract point about the way incentive changes can have large impacts even without people coordinating to deliberately change behavior. That seems like a very appropriate topic for Less Wrong.
I believe that the “problem” is that Lesswrong loves contrarians.
If a smart-sounding article espousing conservative opinions on social issues appears, most lesswrongers will disagree but be interested in reading anyway because it’s novel and there is a dearth of smart conservative opinions in the world, and the exciting chance to “actually change their mind” looms.
If a smart-sounding article espousing liberal opinions on social issues appears, most lesswrongers will agree but be uninterested in reading it, because they've heard it all before, it's preaching to the choir, it's political and mind-killing, etc.
This reversal of traditional attitudes to disagreement has its merits, but we’re seeing the downsides too. (one of the many reasons I advocate having separate feedback buttons for agreement, interest, and quality assessment)
Doesn't this problem gradually fix itself? For example, at the beginning I was interested in Moldbug's articles, but these days I just consider them boring. I have already heard the big picture; now there is nothing new, just reiteration of what was already said; the lack of evidence or even clear explanations is very annoying, and I have already given up hope that this could improve.
These days, if someone says something seemingly smart like "Cthulhu always swims left," my first thought is: give me a definition of what the hell you even mean by this, then give me evidence that it really happens, and if you don't give either (which is my expectation, based on previous experience), then please just shut up, because you're wasting my time.
Speaking for myself, the neo-reactionaries had their chance (which I consider to be a good thing—because I learned a few interesting things), and they wasted it.
I’m not sure, but my personal experience does mostly mirror yours. LW is not a stable group, though—there’s a cycle of users entering and leaving, and the total number of active people at any given time is quite small.
Well, since that post quotes liberally from a “manosphere” website, you’d be justified for assuming that it does take men’s welfare more seriously than women’s. But for what it’s worth, it’s mostly concerned with trying to predict men’s strategically reasonable response to a change in institutions, and determining the resulting equilibrium. Whether you value men’s and women’s welfare equally doesn’t much affect how bad the projected outcome is.
Why? A standard result in the trolley-problem literature is that folks deviate from utilitarian ethics in a way that’s suggestive of just such a moral injunction. People on LW are different, in that they tend to be highly committed to utilitarianism. But we already knew that—the way trolley problems are discussed here is just more evidence of this fact.
Does there need to be a case made for that? This seems like one of the earliest identified reasons for redistributing wealth. People and organizations have sponsored poor talented youth, and this has been considered virtuous, since ancient Greece. And the reforms of education and welfare in the 19th and 20th centuries often emphasized this example, though they may not have always done much about it.
In Slovenia, at least, we have scholarships handed out to people who perform very well on aptitude tests; is this something that doesn't happen as reliably in the US?
I know smart Americans who grew up very poor, and don’t seem to have received a lot of help.
Several states have merit-based scholarships (though these usually require performance in classes as well as aptitude tests, so there is a conscientiousness element as well as an intelligence element). I myself am going to university on a Bright Future scholarship. However, my impression is that federal need-based aid is a lot more common than state merit-based aid.
I don't really like trolley problems either, but I don't think they can be waved away. When you program a self-driving car's decision algorithm for reacting when a car full of people skids in front of it while there is a single pedestrian on the sidewalk where it would have to swerve, you are essentially dealing with a real-world trolley problem.
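To make the point vivid, the tradeoff can be caricatured as a tiny decision function. This is purely a hypothetical sketch under made-up harm numbers, not how any real autonomous-driving stack works (real systems plan over uncertain probabilistic trajectories, not a short list of discrete options):

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_harm: float  # crude stand-in for a real injury/risk model

def choose(options):
    # A bare-bones "utilitarian" controller: take the action whose
    # estimated harm is lowest. The trolley problem hides in how the
    # harm numbers get assigned, not in this one-liner.
    return min(options, key=lambda o: o.expected_harm)

options = [
    Option("brake and hit the skidding car", 0.2),  # occupants shielded by the car body
    Option("swerve onto the sidewalk", 0.9),        # unprotected pedestrian
]
print(choose(options).name)  # → brake and hit the skidding car
```

The one-line `min` is trivial; the ethically loaded work, deciding that a protected occupant counts for less expected harm than an exposed pedestrian, happens in the numbers we fed it, which is exactly the trolley-problem question in disguise.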
Better to hit the other car rather than the pedestrian. The people in the car are protected by a lot of metal and will tend to suffer much less damage.
I think a lot of the focus on trolley problems is that they're a sort of platonic model of making hard decisions about tradeoffs, the idea being that if you can convince people it's right to make tradeoffs in the most obvious situation, they should consider the tradeoffs in much more complicated policy decisions too. E.g., people who propose Basic Income want people to be willing to trade "some of your money" for "greater happiness for many people." This is also what a lot of the Effective Altruism movement is based on: making GOOD tradeoffs rather than bad ones.
Your examples remind me of this thread on suicide, which is the most distressing thing I’ve read on less wrong. (Though it is not exactly an example of “low empathy.”)
This puzzles me. Would you elaborate on the reasons why you found it distressing?
I thought it could actually encourage another suicide. Suicidal feelings are ubiquitous, and the actual act is not that uncommon. When it’s committed it’s often a very disproportionate response to misfortune, or even to a mad self-hating inner monologue. I think the sober discussions that take place here about when misogyny, murder, or torture are warranted are mostly in bad taste, but I find it implausible that they will cause a harm worse than offense. Not so for a sober discussion of when suicide is warranted (even though it is not “offensive” in the same sense!).
Thank you. Well, no wonder I was puzzled, since this sentiment is thoroughly alien to me. When I hear of a suicide, my first thought and feeling are: “I’m glad his/her suffering is over”.
I’m surprised you call it “thoroughly alien,” I would have thought my position is thoroughly non-weird, even cliched. You really think nobody ever made a mistake by killing themselves? I won’t try to tell you otherwise, but you must know that that’s not a typical opinion.