Late edit: boy howdy do the vote ratios between these two comments (and the lack of subsequent acknowledgement or discussion) affirm rather than contradict my sense of LessWrong not being a great place for co-thinking. (This isn’t Cole’s fault.)
Did I miss something? Curious what the observation was.
(the current vote totals are like 38/28 and 12/7 for me, which seem unremarkable).
For a while it was like 31/21 and 2/0 with no further commentary.
@Duncan Sabien (Inactive): given the updated totals @habryka mentioned, does this increase your sense of LessWrong being a great place for co-thinking?
(Current totals are 42/39 and 16/11.)
LessWrong is still a really rough place for me to try to do anything other than “present complete thoughts that I am thoroughly ready to defend.”
The fact that some other comments trickled in, and that the vote ratios stabilized, was definitely an improvement over the situation in the first 36h, but I think it’s not super cruxy? It was more of a crisp example (at the time) of the larger gestalt that demoralizes me, and it ceasing to be an example doesn’t mean the gestalt went away.
One of the things that hurts me when I try to be on LessWrong is something like…
A person will make a comment full of skewed summaries, misinterpretations, and just-plain-wrong claims about the claims it says it's responding to (i.e., it's actually strawmanning them)
Someone else will make an effortful rebuttal that corrects the misconceptions of the top-level comment, and sometimes even answers the steelmanned version of its complaint
The top comment will continue to accrue upvotes at a substantially faster clip than the lower comment, despite being meaningfully wrong, and it won’t ever get edited or fixed and it just keeps anchoring people on wrongthoughts forever
The lower comment gets largely ignored and often people don’t even engage with it at all
...all of which lives in my soul as a sort of despair that might be described as “yeah, so, if you want to give people a platform upon which to strawman you and then gain lots of local status and also infect the broader crowd very efficiently with their uncanny-valley version of your ideas such that it becomes even harder to actually talk about the thing you wanted to talk about … post it on LW!”
A whole other way to gesture at the same problem: out in the real world I often find myself completely alone against the mob, fighting for either [truth] or [goodness] or both. And that's fine. The real world makes no promises about me-not-finding-myself-alone.
But LessWrong, by its name and mission and sometimes by explicit promise or encouragement, is “supposed” to be the sort of place where I’m not completely alone against the mob. For one thing, the mob is supposed to be good instead of bad, and for another, there are supposed to be other people around who are also fighting the good fight, and not just me all by myself.
(There’s some hyperbole here, but.)
Instead, definitely not all the time but enough times that it matters, and enough times that it’s hurt me over and over again, and enough times that it produces strong hesitation, I’ve found myself completely alone with even high-status LWers (sometimes even senior mods!) just straightforwardly acting as the forces of darkness and madness and enacting mindkilledness and advocating for terrible and anti-epistemic things and it just hurts real bad. The feeling of betrayal and rug-pulled-out is much worse because this was supposed to be a place where people who care about collaborative truth-seeking could reliably find others to collaborate with.
I can find people to think productively with on LessWrong. But I can’t rely on it. More than 20% of the time, it goes badly, and “do an expansive and vulnerable thing in an environment where you will be stabbed for it one time out of five” just … kinda doesn’t work.
I have tried thinking about the best way to avoid the groupthink that exists on most social media forums, and also on LessWrong. I'm not yet convinced it can be solved at the level of software; the underlying social graph is fucked up.
There is no agreed-upon definition of "good" on LessWrong. There are people both for and against:
various AI alignment research agendas
various AI governance agendas
human genetic engineering
utilitarianism
longtermism
transhumanism of various types, such as human genetic engineering, whole brain emulation, etc.
Getting questions like these right or wrong is going to affect everything. Literal billions of people may well end up living or dying based on how these discussions go. (And yes, I mean present-day people, not hypothetical future people.)
I'm morally against a majority of the ideas proposed on LessWrong, so I'm well aware it's a hostile place for me.
I understand you are not here to discuss longtermism or whatever, but I'm just making you aware that this is what the "mob" does at this place too, so of course the same behaviour carries over to your replies.