There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers; medical school seems like a good idea. I do not think meetups are like this, and I do not think the majority of organizers are completely unqualified; even if they're terrible at the interpersonal conflict part, they're often fine at picking a location and time and bringing snacks. That makes them partially qualified.
FWIW, my experience is that rationalist meetup organizers are in fact mostly terrible at picking a location and at bringing snacks. (That’s mostly not the kind of failure mode that is relevant to our discussion here—just an observation.)
Anyhow…
The −2std failure case is something like: they announce a time and place that's inconvenient, then show up half an hour late and talk over everyone, so not many people come and the attendees who do show up don't have a good time. This is not great, and I try to avoid that outcome where I can, but it's not so horrible that I'd give up ten average meetups to prevent it. Worse outcomes do happen, and those are the ones I get more concerned about.
All of this (including the sentiment in the preceding paragraph) would be true in the absence of adversarial optimization… but that is not the environment we’re dealing with.
(Also, just to make sure we're properly calibrating our intuitions: −2std is roughly 1 in 44, call it 1 in 50.)
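(If anyone wants to check that figure themselves, here's a minimal sketch in Python using scipy; the "normally distributed meetup quality" framing is of course only a rough model, not anything claimed above:)

```python
# Tail probability of a standard normal at -2 standard deviations.
# Sanity-check sketch only; assumes meetup quality is roughly normal.
from scipy.stats import norm

p = norm.cdf(-2)                       # P(Z <= -2)
print(f"P(Z <= -2) = {p:.4f}")         # ~0.0228
print(f"i.e. roughly 1 in {1/p:.0f}")  # ~1 in 44
```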
It’s possible you have a higher bar, or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced,” and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups.
No, I don’t think that’s it. (And I gave up on the “a rationalist meetup ought to have some rationality practiced” notion a long, long time ago.)