Well… hard to say. The LW mods now pass that threshold[1], but then again they’re not beginning now; they began eight years ago.
My sense is that if the mods had waited to start trying to moderate things until they met this threshold, they wouldn’t wind up ever meeting it. There’s a bit of, if you can’t bench press 100lbs now, try benching 20lbs now and you’ll be able to do 100lbs in a couple years, but if you just wait a couple years before starting you won’t be able to then either.
Ideally there’s a way to speed that up and among the ideas I have for that is writing down some lessons I’ve learned in big highlighter. I’m pretty annoyed at how hard it is to get a good feedback loop and get some real reps in here.
Yes… essentially, this boils down to a pattern which I have seen many, many times. It goes like this:
...
In this case: a bunch of people who are completely unqualified to run meetups are trying to run meetups. Can they run meetups well? No, they cannot. What should they do? They should not run meetups. Then who will run the meetups? Nobody.
There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers, medical school seems a good idea. I do not think meetups are like this and I do not think the majority of the organizers are completely unqualified; even if they’re terrible at the interpersonal conflict part they’re often fine at picking a location and time and bringing snacks. That makes them partially qualified.
The −2std failure case is something like, they announced a time and place that’s inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don’t have a good time. This is not great and I try to avoid that outcome where I can, but it’s not so horrible that I’d give up ten average meetups to prevent it. Worse outcomes do happen where I do get more concerned.
It’s possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced” and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups. Which, given my domain is ACX groups, isn’t that surprising. Conflict can come for them anyway.
Hrm. Maybe a helpful model here is that I’m trying to reduce the failure rate? The perfect spam filter bins all spam and never bins non-spam. If someone woke up, went to work, and improved the spam filter such that it let half as much spam through, that would be progress. If because of my work half the [organizers who would have burned out / attendees who would have been sadly driven away / malefactors who would have caused problems] have a better outcome, I’ll call it an incremental victory.
> And there ought to be “centralized” efforts to develop effective solutions which can then be taught and deployed.
*waves* Hi, one somewhat central fellow, trying to develop some effective solutions I can teach. I don’t think I’m the only one (as usual I think CEA is ahead of me) but I’m trying. I didn’t write much about this for the first year or two because I wasn’t sure which approaches worked and which advisors were worth listening to. Having gone around the block a few times, I feel like I’ve got toeholds, at least enough to hopefully warn people away from some fool’s mates.
> There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers, medical school seems a good idea. I do not think meetups are like this and I do not think the majority of the organizers are completely unqualified; even if they’re terrible at the interpersonal conflict part they’re often fine at picking a location and time and bringing snacks. That makes them partially qualified.
FWIW, my experience is that rationalist meetup organizers are in fact mostly terrible at picking a location and at bringing snacks. (That’s mostly not the kind of failure mode that is relevant to our discussion here—just an observation.)
Anyhow…
> The −2std failure case is something like, they announced a time and place that’s inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don’t have a good time. This is not great and I try to avoid that outcome where I can, but it’s not so horrible that I’d give up ten average meetups to prevent it. Worse outcomes do happen where I do get more concerned.
All of this (including the sentiment in the preceding paragraph) would be true in the absence of adversarial optimization… but that is not the environment we’re dealing with.
(Also, just to make sure we’re properly calibrating our intuitions: −2std is roughly 1 in 44, commonly rounded to 1 in 50.)
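For calibration, the −2std tail probability can be computed directly from the standard-normal CDF. A minimal sketch using only Python’s `math` module (the function name `normal_cdf` is my own, not from the thread):

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# P(Z < -2): the probability mass at or below two standard deviations
tail = normal_cdf(-2.0)
print(f"P(Z < -2) = {tail:.5f}")   # ≈ 0.02275
print(f"about 1 in {1 / tail:.0f}")  # ≈ 1 in 44
```

So the exact figure is about 2.3%, i.e. one event in ~44, with “1 in 50” as the usual round number.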
> It’s possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced” and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups.
No, I don’t think that’s it. (And I gave up on the “a rationalist meetup ought to have some rationality practiced” notion a long, long time ago.)
fwiw, these are what I’d say 2std failure cases of rationalist meetups look like:
https://www.wired.com/story/delirious-violent-impossible-true-story-zizians/
https://variety.com/2025/tv/news/julia-garner-caroline-ellison-ftx-series-netflix-1236385385/
https://www.wired.com/story/book-excerpt-the-optimist-open-ai-sam-altman/
(Ways my claim could be false: there could have been way more than 150 rationalist meetups, so that these failures are rarer than 2std; or these could not have, at any point in their development, counted as rationalist meetups; or Ziz, Sam, and Eliezer could have intended these outcomes, so these don’t count as failures.)
I think of Ziz and co as less likely than 2std out, for about the reasons you give. I tend to give 200 as the rough number of organizers and groups, since I get a bit under that for ACX Everywhere meetups in a given season. If we’re asking per-event, Dirk’s ~5,000 number sounds low (off the top of my head, San Diego does frequent meetups but only the ACX Everywheres wind up on LessWrong, and there are others like that) but I’d believe 5,000–10,000.
You’re way off on the number of meetups. The LW events page has 4684 entries (kudos to Said for designing GreaterWrong such that one can simply adjust the URL to find this info). The number will be inflated by any duplicates or non-meetup events, of course, but it only goes back to 2018 and is thus missing the prior decade+ of events; accordingly, I think it’s reasonable to treat it as a lower bound.
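To sanity-check whether catastrophes like the linked ones are really “−2std” events, here’s a back-of-envelope count. This assumes events are independent and that the −2std bar is defined per event, neither of which is stated in the thread; the event counts are the ones discussed above:

```python
# P(Z < -2) for a normal distribution, to three significant figures
tail = 0.0228

# Event counts from the thread: 4,684 LW event entries (a lower bound),
# plus the 5,000 and 10,000 figures floated as plausible totals.
for n_events in (4_684, 5_000, 10_000):
    expected = n_events * tail
    print(f"{n_events} events -> ~{round(expected)} expected at -2std or worse")
```

At these event counts, a genuinely −2std-per-event failure mode would have produced on the order of a hundred-plus incidents, so a handful of catastrophes on record puts them much further out in the tail than 2std.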