This feels sort of on the edge of “is useful outside of the current discussion.” It’d be fine to write up as its own post, but my current feeling is that it’s accomplishing most of its value as an answer to this question.
[this is just my opinion of what feels vaguely right as a user, not intended to be normative]
I roughly endorse this description.
Could use more examples.
Sounds plausible. Would be interested in seeing more evidence that this works.
The thing we actually care about… Is it how everyone feels?
I happen to roughly agree with this but be warned that there are people who get off this train right about here.
Anyone know if there have been updates to this in the past few years? I made a very brief (30 seconds) attempt to search for information on it but had trouble figuring out what question to ask Google.
See this Open Question for some accumulated thoughts on how to explain social reality.
I haven’t yet fully checked whether I endorse the description there, but it seemed good to link to Ruby’s post:
I’d be happy to get specific constructive suggestions about how to do that more clearly.
I don’t know that this suggestion is best – it’s a legitimately hard problem – but a policy I think would be pretty reasonable is:
When responding to lengthy comments/posts that include at least 1-2 things you know you dealt with in a longer series, one option is to simply leave it at: “hmm, I think it’d make more sense for you to read through this longer series and think carefully about it before continuing the discussion” rather than trying to engage with any specific points.
And then shifting the whole conversation into a slower mode, where people are expected to take a day or two in between replies to make sure they understand all the context.
(I think I would have had similar difficulty responding to Evan’s comment as what you describe here)
FYI I’m leaning towards giving him a temp ban on LW but haven’t checked in with other LW team members since we’re all recovering from EA Global.
(I also think it’s just sort of okay for there to be a mutual understanding and clarity that some classes of feedback need to be treated as indistinguishable from attacks, which means they need to be somewhat socially punished to disincentivize coalition politics, but that doesn’t mean they don’t also get listened to)
One thing re: missing moods is that while I think there’s room for improvement on the “be able to make criticisms without them being attacks” front, I think solving this looks quite different from the way you (and Duncan of 1.5 years ago) were trying to solve it.
There are fundamental limitations of a public forum, and of sprawling, heated discussions in particular. I think it will always require costly demonstrations of good faith to make strong criticisms in public without being perceived as attacking. I think if you attempt to do this without those demonstrations, you are just laying down norms that enable and incentivize politicians, resulting in less clarity, not more.
But there are two options that both seem relatively straightforward to me:
1. Make criticisms, and employ a lot of costly signaling that you are arguing in good faith.
2. Have a norm wherein people discuss criticism in private, and then afterwards publish a public document that they both endorse. (This may in some cases require counterfactual willingness to write critiques that are attacks)
I generally prefer the latter once a conversation has begun to branch and get heated. Once a conversation has become multithreaded and involves serious disagreements, maintaining good faith becomes exponentially more expensive.
I’ve found it somewhat hard so far to treat this differently than a LW comment thread. It’s possible a PM setup would have a different effect. This doesn’t lead me to want to change anything about the experiment, just noting it.
strongly believe it’s wrong
And I’m realizing I think I basically knew that “somewhat skeptical” was not an accurate way to describe your beliefs, and I think the algorithm that led me to write it that way was running through some sort of modesty or conflict-mediation filter that I don’t endorse. Mostly noting this for my own reference.
Agreed that all three types of criticism are quite important – and yes, I intended for “criticizing an error entangled with someone’s identity” to be quite different from what I mean by “call to conflict” here. (“Call to conflict” maybe fits more into the second bullet point here, and the way I was intending to use it was for particularly extreme instances.)
I have more thoughts but they’re taking a while to get in order.
I’m interested in a medium-fleshed-out version of this comment that holds my hand more than the current one does. (Not sure whether I’d want the full fledged post version yet)
(In general, happy to see more people using shortform feeds)
((also, you probably didn’t mean to call it a short-term feed))
I think I agree with most of the basic concepts here, and disagreements are mostly of the form “given current resources, what goals are practical to set and achieve?”
I think both more active moderation of the type Duncan describes and an epistemic court would be good, and the only argument I have against them is that they’re expensive. An epistemic court seems potentially more viable because it doesn’t necessarily need to be used all the time – it’s expensive, but if only used on the most important cases it might be affordable.
The sorts of systems that I think LW is exploring right now are ones that “solve problems with technology, rather than cognitive effort, when possible.” Competent people are busy and the world is big, so it makes more sense to do things like nudges that require minimal effort from moderators to maintain. (The parts of Duncan’s suggestions that we’ve come closest to implementing are things that make it easy for moderators to at least skim each new post and take a few quick actions.)
This does mean there are limits on what sort of place LessWrong can be.
A lot of why I’ve been skeptical of the idea of a generic forum over the last few years is that it seems to me like people who are trying to figure something specific out – who have a perspective which in some concrete, interested way wants to be made more correct – are going to have a huge advantage at filtering constructive from unconstructive comments, compared to people who are trying to comply with the rules of good thinking.
This does sound like a good description of the problem.
I agree that people have a justified expectation that criticism actually is meant as an attack, but that just means we have to solve a hard problem. If we bounce off it instead, then this isn’t really a rationality site, it’s just a weird social club with shared rationality-related applause lights.
I definitely think of solving this as part of my long-term goal. But a major disagreement of mine is with the claim that “if you can’t solve this, you’re just left with a weird social club.” (This was also a major disagreement of mine with Duncan.)
I think there are lots of things you can achieve that are massive improvements over the status quo, that don’t require solving this problem. There are probably around 20 major characteristics I wish each LW user had (such as “be able to think in probabilities” and “be able to generate hypotheses for confusing phenomena”), and most of them can be improved with “regular learning and practice”, and nudges, rather than by overcoming weird adversarial anti-inductive dynamics.
LessWrong isn’t as good as many small, private, heavily filtered spaces, but a) its present form still seems like a significant improvement over most alternatives in the same reference class of public forums, and b) I think there’s a bunch of room for further improvement.
A major example the team is exploring is the Open Questions feature. An important aspect of it is that it sort of forces people to focus on the object level, and on actually figuring things out. It’s harder to have a demon thread when the frame is “help answer this question.” And meanwhile it can start to shift people’s default behavior from “sort of just hang out on the internet” to “actually do intellectual labor that solves a problem.”
EA is “effective altruism”. (And yeah this thread basically assumes a lot of context)
Nod. Although I’ll note “near future” can include “years or in some cases decades.”
Meta Thread (for making observations or suggested changes to the format)