Er, sorry, I think you might’ve misread my comment? What I was saying was that the more seriously the people with the power and authority take the problem, the better it is. (I think that perhaps you got the direction backwards from how I wrote it? Your response would make sense if I had said “directly proportional”, it seems to me.)
“And the severity of the failure will be inversely proportional to how seriously the people with the power and authority take this problem, and to how much effort they put into addressing it.”
Hrm. Yes, I seem to have read it differently, apologies. I think I flipped the sign on “the severity of the failure” and interpreted it as the failure being bigger the more seriously the people with power and authority took the problem.
“Better not to begin” wouldn’t be “4chan”, it would be “nothing”.
I agree that the moderators on Less Wrong aren’t quite at the level we’re talking about, but they’re certainly closer than most people in most places.
Yeah. I prefer having LessWrong over having nothing in its place. I even prefer having LessWrong over having nothing in the place of everything shaped like an internet forum.
Do the LW mods pass your threshold for good enough it’s worth beginning? I think a lot of my incredulity here comes from trying to figure out how big that gap is, though in terms of the specific problem I’m trying to solve I think I need to take as a premise that I start with whatever crop of ACX organizers I’m offered by selection effects.
Do the LW mods pass your threshold for good enough it’s worth beginning?
Well… hard to say. The LW mods now pass that threshold[1], but then again they’re not beginning now; they began eight years ago.
I think a lot of my incredulity here comes from trying to figure out how big that gap is, though in terms of the specific problem I’m trying to solve I think I need to take as a premise that I start with whatever crop of ACX organizers I’m offered by selection effects.
Yes… essentially, this boils down to a pattern which I have seen many, many times. It goes like this:
A: You are trying to do X, which requires Y. But you don’t have Y.
B: Well, sure, I mean… not exactly, no. I mean, mostly, sort of… (a bunch more waffling, eventually ending with…) Yeah, we don’t have Y.
A: So you can’t do X.
B: Well, we have to, I’m afraid…
A: That’s too bad, because you can’t. As we’ve established.
B: Well, what are we going to do, just not do X?
A: Right.
B: Unacceptable! We have to!
A: You are not going to successfully do X. That will either be because you stop trying, or because you try but fail.
B: Not doing X is not an option!
B tries to do X
B fails to do X, due to the lack of Y
A: Yep.
B: Well, we have to do our best!
A: Your best would be “stop trying to do X”.
B ignores A, continues trying to do X and predictably failing, wasting resources and causing harm indefinitely (or until external circumstances terminate the endeavor, possibly causing even more harm in the process)
In this case: a bunch of people who are completely unqualified to run meetups are trying to run meetups. Can they run meetups well? No, they cannot. What should they do? They should not run meetups. Then who will run the meetups? Nobody.
Now, while reading the above, you might have thought: “obviously B should be trying to acquire Y, in order to successfully do X!”. I agree. But that does not look like “do X anyway, and maybe we’ll acquire Y in the process”. (Y, in this case, is “the skills that we’ve been discussing in this comment thread”.) It has to be a goal-directed effort, with the explicit purpose of acquiring those skills. It can be done while also starting to actually run meetups, but only with an explicit awareness and serious appreciation of the problem, and with serious effort being continuously put in to mitigate the problem. And the advice for prospective meetup organizers should tackle this head-on, not seek to circumvent it. And there ought to be “centralized” efforts to develop effective solutions which can then be taught and deployed.
You might say: “this is a high bar to clear, and high standards to meet”. Yes. But the standards are not set by me, they are set by reality; and the evidence of their necessity has been haunting us for basically the entirety of the “rationalist community”’s existence, and continues to do so.

[1] Approximately, anyway. There’s a bunch of mods, they’re not all the same, etc.
Well… hard to say. The LW mods now pass that threshold[1], but then again they’re not beginning now; they began eight years ago.
My sense is that if the mods had waited to start trying to moderate things until they met this threshold, they wouldn’t ever have wound up meeting it. There’s a bit of: if you can’t bench press 100 lbs now, start benching 20 lbs now and you’ll be able to do 100 lbs in a couple of years; but if you just wait a couple of years before starting, you won’t be able to do it then either.
Ideally there’s a way to speed that up, and among the ideas I have for that is writing down some lessons I’ve learned in big highlighter. I’m pretty annoyed at how hard it is to get a good feedback loop and get some real reps in here.
Yes… essentially, this boils down to a pattern which I have seen many, many times. It goes like this:
...
In this case: a bunch of people who are completely unqualified to run meetups are trying to run meetups. Can they run meetups well? No, they cannot. What should they do? They should not run meetups. Then who will run the meetups? Nobody.
There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers; medical school seems like a good idea. I do not think meetups are like this, and I do not think the majority of the organizers are completely unqualified; even if they’re terrible at the interpersonal conflict part, they’re often fine at picking a location and time and bringing snacks. That makes them partially qualified.
The −2std failure case is something like: they announce a time and place that’s inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don’t have a good time. This is not great, and I try to avoid that outcome where I can, but it’s not so horrible that I’d give up ten average meetups to prevent it. Worse outcomes do happen, and there I do get more concerned.
It’s possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced”, and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups. Which, given that my domain is ACX groups, isn’t that surprising. Conflict can come for them anyway.
Hrm. Maybe a helpful model here is that I’m trying to reduce the failure rate? The perfect spam filter bins all spam and never bins non-spam. If someone woke up, went to work, and improved the spam filter such that it let half as much spam through, that would be progress. If, because of my work, half the [organizers who would have burned out / attendees who would have been sadly driven away / malefactors who would have caused problems] have a better outcome, I’ll call it an incremental victory.
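(To make that concrete, here’s a toy sketch of the “reduce the failure rate” framing. All the numbers are illustrative assumptions for the sake of the example, not data from this thread.)

```python
# Toy model of the "better spam filter" framing: halving the failure rate
# halves the expected number of bad outcomes, without ever reaching zero.
meetups_per_year = 200       # hypothetical count, roughly the ACX Everywhere scale
baseline_fail_rate = 0.02    # hypothetical "-2std-ish" rate, about 1 in 50
improved_fail_rate = baseline_fail_rate / 2

for label, rate in [("baseline", baseline_fail_rate), ("improved", improved_fail_rate)]:
    print(f"{label}: ~{meetups_per_year * rate:.0f} expected bad meetups per year")
```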
And there ought to be “centralized” efforts to develop effective solutions which can then be taught and deployed.
*waves* Hi, one somewhat central fellow, trying to develop some effective solutions I can teach. I don’t think I’m the only one (as usual, I think CEA is ahead of me), but I’m trying. I didn’t write much about this for the first year or two because I wasn’t sure which approaches worked and which advisors were worth listening to. Having gone around the block a few times, I feel like I’ve got toeholds, at least enough to hopefully warn away some fool’s mates.
fwiw, these are what I’d say a 2std failure case of a rationalist meetup looks like:
https://www.wired.com/story/delirious-violent-impossible-true-story-zizians/
https://variety.com/2025/tv/news/julia-garner-caroline-ellison-ftx-series-netflix-1236385385/
https://www.wired.com/story/book-excerpt-the-optimist-open-ai-sam-altman/
(Ways my claim could be false: there could have been way more than 150 rationalist meetups, so that these are lower than 2 std; or these could not have, at any point in their development, counted as rationalist meetups; or Ziz, Sam, and Eliezer could have intended these outcomes, so these don’t count as failures.)
I think of Ziz and co as further out than 2std, for about the reasons you give. I tend to give 200 as the rough number of organizers and groups, since I get a bit under that for ACX Everywhere meetups in a given season. If we’re asking per-event, Dirk’s ~5,000 number sounds low (off the top of my head, San Diego does frequent meetups but only the ACX Everywheres wind up on LessWrong, and there are others like that), but I’d believe 5,000–10,000.
You’re way off on the number of meetups. The LW events page has 4684 entries (kudos to Said for designing GreaterWrong such that one can simply adjust the URL to find this info). The number will be inflated by any duplicates or non-meetup events, of course, but it only goes back to 2018 and is thus missing the prior decade+ of events; accordingly, I think it’s reasonable to treat it as a lower bound.
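(For concreteness, a minimal sketch, using only the Python standard library, of how much the denominator drives the “how many std out is this?” question. The failure count of three and the candidate meetup counts are the ones floated above; everything else is just the normal-distribution framing already in use here.)

```python
from statistics import NormalDist

k_failures = 3  # the three linked catastrophic outcomes

# Candidate denominators from this thread: ~150-200 groups/organizers,
# or ~5,000-10,000 individual events (the LW events page alone lists 4684).
for n in (150, 200, 5_000, 10_000):
    p = k_failures / n            # empirical tail probability
    z = NormalDist().inv_cdf(p)   # z-score with that much probability mass below it
    print(f"n={n:>6}: failure rate {p:.3%}, i.e. about {z:+.2f} std")
```

At n=150 that comes out to about −2.05 std, matching the “2std” framing; at n=5,000 it is closer to −3.2 std, a considerably rarer event, so the choice of denominator does most of the work in this disagreement.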
There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers; medical school seems like a good idea. I do not think meetups are like this, and I do not think the majority of the organizers are completely unqualified; even if they’re terrible at the interpersonal conflict part, they’re often fine at picking a location and time and bringing snacks. That makes them partially qualified.
FWIW, my experience is that rationalist meetup organizers are in fact mostly terrible at picking a location and at bringing snacks. (That’s mostly not the kind of failure mode that is relevant to our discussion here—just an observation.)
Anyhow…
The −2std failure case is something like: they announce a time and place that’s inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don’t have a good time. This is not great, and I try to avoid that outcome where I can, but it’s not so horrible that I’d give up ten average meetups to prevent it. Worse outcomes do happen, and there I do get more concerned.
All of this (including the sentiment in the preceding paragraph) would be true in the absence of adversarial optimization… but that is not the environment we’re dealing with.
(Also, just to make sure we’re properly calibrating our intuitions: −2std is about 1 in 44; call it 1 in 50.)
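(A one-line check of that tail probability, using the Python standard library:)

```python
from statistics import NormalDist

p = NormalDist().cdf(-2)                      # probability mass below -2 std
print(f"{p:.4f}, i.e. about 1 in {1/p:.0f}")  # 0.0228, i.e. about 1 in 44
```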
It’s possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced”, and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups.
No, I don’t think that’s it. (And I gave up on the “a rationalist meetup ought to have some rationality practiced” notion a long, long time ago.)
Is it better to have a rationality meetup with a baseline level of unskilled ad-hoc conflict resolution, or no rationality meetup? (You can ask the same question about any social event potentially open to the public.) I imagine the answer would depend on the number of people expected to attend—informal methods work better for groups of 10 than for groups of 100.
It is better to have no rationality meetup.