Probably nobody, but then again, your sourdough is probably not impinging on anyone’s interests, either. Baking a loaf of sourdough doesn’t really come with opportunities to exploit other people for your own gain, etc. So of course there’s not going to be much controversy.
But whenever there is controversy, usually due to the existence of genuinely competing interests, then motives for sabotage become plausible, whereupon it immediately becomes tempting to declare that those who think that you ought to be doing things differently are just trying to sabotage you.
Finding people who are trustworthy, good at handling it, and willing to teach you is wonderful. I’ve been trying to learn the most from sources well outside the rationalist community, but I think there is good advice to be had. Just, not uncritically trusted?
Also, some people seem to think this class of problem should be easy. For those people I want to make the point that it is (at least sometimes) an adversarial situation.
I agree, it certainly is an adversarial situation—and not only sometimes, but most of the time. And I agree that you should not uncritically trust advice that you hear from any sources. In fact, you shouldn’t even trust advice that you hear from yourself.
Consider your bank example again. You might think: “hmm, that guy has an odd amount of knowledge of, and/or interest in, internal bank practices and security and so on; suspicious!”. Then you learn that he works at a bank himself, so it turns out that his knowledge and interest aren’t suspicious after all—great, cancel that red flag.
No! Wrong! Don’t cancel it! Put it back! Raise two red flags! (“An analysis by the American Bankers Association concluded that 65% to 70% of fraud dollar losses in banks are associated with insider fraud.”) Suspect everyone, especially the people you’ve already decided to trust!
But of course “suspect” is exactly the wrong word here. If you’re having to suspect people, you’ve already lost.
Consider computer security. I ask about the security software that your company is using to protect your customers’ data—could I see the code? Which cryptographic algorithms do you use? You’re suspicious; what do I need this information for? Who should be allowed to have this sort of knowledge?
And of course the right answer is “absolutely everyone”. It should be fully public. If your setup is such that it even makes sense to ask this question of “who should be allowed to know what cryptographic algorithm we use”, then your security system is a complete failure and nobody should trust you with so much as their mother’s award-winning recipe for potato salad, much less any truly sensitive data.
The way to ensure that you don’t accidentally give the wrong person insider access to your system is to construct a system such that nobody can exploit it by having insider access.
(Another way of putting this is to say that selective methods absolutely do not suffice for ensuring the trustworthiness and integrity of social systems.)
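(A minimal sketch of the principle being invoked here, Kerckhoffs’s principle: the design is fully public, and the only secret is the key. It uses nothing but Python’s standard-library hmac, hashlib, and secrets modules; the function names and the toy “transfer” message are my own illustration, not anything from the thread.)
```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is completely public; anyone may read this code,
# audit it, and know exactly which primitives are in use. The only secret is the key.

def make_tag(key: bytes, message: bytes) -> str:
    """Compute an authentication tag using a fully public, well-specified algorithm."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_tag(key: bytes, message: bytes, tag: str) -> bool:
    """Verify a tag in constant time; knowing the algorithm gains an attacker nothing."""
    return hmac.compare_digest(make_tag(key, message), tag)

if __name__ == "__main__":
    key = secrets.token_bytes(32)  # the one and only secret
    msg = b"transfer $100 to account 42"
    tag = make_tag(key, msg)
    assert verify_tag(key, msg, tag)                                   # genuine message passes
    assert not verify_tag(key, b"transfer $9999 to account 666", tag)  # tampering is caught
    print("Security rests entirely on the key, not on hiding how the system works.")
```
If publishing this code weakened the system, the system was never secure to begin with; that is the same test the comment above proposes applying to social systems.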
The same is true for the problem of “from whom to take advice on conflict resolution”. You should not have to figure out the motives of the advice-giver or to decide whether to trust their advice. Your procedure for evaluating advice should work perfectly even if the advice comes from your bitter enemy who wishes nothing more than to see you fail. And then you should apply that same procedure to what you already believe and the practices you are already employing—take the advice that you would give to someone, and ask what you would think of it if it had come to you from someone you suspected might be your worst and most cunning enemy. Is your evaluation procedure robust enough to handle that?
If it is not, then any time spent thinking about whether the source of the advice is trustworthy is pointless, because you can’t very well trust someone else more than you trust yourself, and your evaluation procedure is too weak to guard against your own biases. And if it is robust enough, then once again it is pointless to wonder whom you should trust, because you don’t have to trust anyone—only to verify.
And of course the right answer is “absolutely everyone”. It should be fully public. If your setup is such that it even makes sense to ask this question of “who should be allowed to know what cryptographic algorithm we use”, then your security system is a complete failure and nobody should trust you with so much as their mother’s award-winning recipe for potato salad, much less any truly sensitive data.
This makes sense for computer security, but for biosecurity it doesn’t work, because it’s a lot harder to ship a patch to people’s bodies than to people’s computers. The biggest reason there has never been a terrorist attack with a pandemic-capable virus is that, with few exceptions (such as smallpox), we don’t know what they are.
A: My understanding is that the U.S. Government is currently funding research programs to identify new potential pandemic-level viruses.
K: Unfortunately, yes. The U.S. government thinks we need to learn about these viruses so we can build defenses — in this case vaccines and antivirals. Of course, vaccines are what have gotten us out of COVID, more or less. Certainly they’ve saved a ton of lives. And antivirals like Paxlovid are helping. So people naturally think, that’s the answer, right?
But it’s not. In the first place, learning whether a virus is pandemic-capable does not help you develop a vaccine against it in any way, nor does it help create antivirals. Second, knowing about a pandemic-capable virus in advance doesn’t speed up research on vaccines or antivirals. You can’t run a clinical trial in humans on a new virus of unknown lethality, especially one which has never infected a human — and might never. And given that we can design vaccines in one day, you don’t save much time by knowing what the threat is in advance.
The problem is there are around three to four pandemics per century that cause a million or more deaths, just judging from the last ones — 1889, 1918, 1957, 1968 and 2019. There’s probably at least 100 times as many pandemic-capable viruses in nature — it’s just that most of them never get exposed to humans, and if they do, they don’t infect another human soon enough to spread. They just get extinguished.
What that means is if you identify one pandemic-capable virus, even if you can perfectly prevent it from spilling over and there’s zero risk of accidents, you’ve prevented 1⁄100 of a pandemic. But if there’s a 1% chance per year that someone will assemble that virus and release it, then you’ve caused one full pandemic in expectation. In other words, you’ve just killed more than 100 times as many people as you saved.
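(A back-of-the-envelope version of that calculation, as a sketch. The 1-in-100 figure and the 1%-per-year release risk are the numbers given above; the 100-year horizon is my own illustrative assumption, chosen so that the expected number of deliberate releases works out to one.)
```python
# Expected pandemics prevented vs. caused by identifying (and publishing) one
# pandemic-capable virus, using the rough numbers from the argument above.

candidates_per_pandemic = 100   # ~100 pandemic-capable viruses in nature per actual pandemic
p_release_per_year = 0.01       # assumed chance per year that someone assembles and releases it
years = 100                     # illustrative horizon (my assumption, not stated in the source)

# Even with perfect spillover prevention, identifying one virus averts only 1/100 of a pandemic.
expected_prevented = 1 / candidates_per_pandemic             # 0.01

# In expectation, a 1%-per-year release risk over a century is one full pandemic.
expected_caused = p_release_per_year * years                 # 1.0

print(f"expected pandemics prevented: {expected_prevented:.2f}")
print(f"expected pandemics caused:    {expected_caused:.2f}")
print(f"ratio: about {expected_caused / expected_prevented:.0f}x more caused than prevented")
```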
you can’t very well trust someone else more than you trust yourself
In certain domains, I absolutely can and will do this, because “someone else” has knowledge and experience that I don’t and could not conveniently acquire. For example, if I hire lawyers for my business’s legal department, I’m probably not going to second-guess them about whether a given contract is unfair or contains hidden gotchas, and I’m usually going to trust a doctor’s diagnosis more than I trust my own. (The shortfalls of “Doctor Google” are well-known, so although I often do “do my own research” I only trust it so much.)
In certain domains, I absolutely can and will do this, because “someone else” has knowledge and experience that I don’t and could not conveniently acquire.
And how do you choose who the “someone else” is?
Honestly? By going to the list of doctors that my health insurance will pay for, or some other method of semi-randomly choosing among licensed professionals that I hope doesn’t anti-correlate with the quality of their advice. There are probably better ways, but I don’t know what they are offhand. ::shrug::
If you were accused of a crime and intended to plead not guilty, how would you choose a defense attorney, assuming you weren’t going to use a public defender?
So you trust yourself to decide how to select a doctor; you trust your decision procedure, which you have chosen.
If you were accused of a crime and intended to plead not guilty, how would you choose a defense attorney, assuming you weren’t going to use a public defender?
I’d ask trusted friends for recommendations, because I trust myself to know whom to ask, and how to evaluate their advice.
I would be delighted to have the social equivalent of a zero trust conflict resolution system that everyone who interacted with it could understand and where the system could also maintain confidentiality as needed. I’m in favour of the incremental steps towards that I can make. In the abstract, I agree the procedure for evaluating advice should work even if it comes from bitter enemies. I do not think my personal evaluation procedure is currently robust enough to handle that, though tsuyoku naritai, someday maybe it will be.
The main context I encounter these problems is in helping local ACX meetup organizers. Some of them first found the blog a few months ago, ran a decent ACX Everywhere that blossomed into a regular meetup group, and then a conflict happened. I want good advice or structures to hand to them, and expecting them to be able to evaluate my advice to that standard seems unreasonable. It’s likely that at least one and possibly all of the local belligerents will have suggestions, and those suggestions will conveniently favour the advice-giver.
One way to read this essay, which I would endorse as useful, is as one answer to the question “why do all the people in this conflict I find myself in have such different ideas of the procedure we should use to resolve it?”
I’m in favour of the incremental steps towards that I can make. In the abstract, I agree the procedure for evaluating advice should work even if it comes from bitter enemies. I do not think my personal evaluation procedure is currently robust enough to handle that, though tsuyoku naritai, someday maybe it will be.
Yes, but in the absence of this, every other approach is doomed to failure. And the severity of the failure will be inversely proportional to how seriously the people with the power and authority take this problem, and to how much effort they put into addressing it.
I want good advice or structures to hand to them, and expecting them to be able to evaluate my advice to that standard seems unreasonable.
Respectfully, I disagree. I think that this is the only standard that yields workable results. If it cannot be satisfied even approximately, even in large part (if not in whole), then better not to begin.
I’m trying to come up with people that I think actually reach the standard you’re describing. I think I know maybe ten, of which two have any time or interest in handling meetup conflicts.
I do agree there’s some big failures that can happen when the people with authority to solve the problem take it very seriously, put a lot of effort into addressing it, and screw up. I don’t agree that the relationship is inversely proportional; if I imagine, say, a zero-effort organizer who does nothing vs. a 0.1-effort organizer who only moderates to say “shut up or leave” to attendees who keep yelling that their political opponents should be killed, this seems like an improvement. There’s a lot of low-hanging fruit here.
It’s possible “even approximately, even in large part” covers a much greater range than I’m interpreting it as, and your standard is lower than it sounds. If not, I think we’re at an impasse of a disagreement. I think that if nobody does any conflict resolution at all unless they are that good of an evaluator, all but a vanishingly small number of spaces will become much worse. We’re talking on LessWrong, I do not think the moderators here are at that level, and yet the space is much improved relative to other places. Seems like 4chan decided better not to begin, and I like LessWrong more.
I do agree there’s some big failures that can happen when the people with authority to solve the problem take it very seriously, put a lot of effort into addressing it, and screw up. I don’t agree that the relationship is inversely proportional; if I imagine, say, a zero-effort organizer who does nothing vs. a 0.1-effort organizer who only moderates to say “shut up or leave” to attendees who keep yelling that their political opponents should be killed, this seems like an improvement. There’s a lot of low-hanging fruit here.
Er, sorry, I think you might’ve misread my comment? What I was saying was that the more seriously the people with the power and authority take the problem, the better it is. (I think that perhaps you got the direction backwards from how I wrote it? Your response would make sense if I had said “directly proportional”, it seems to me.)
I think that if nobody does any conflict resolution at all unless they are that good of an evaluator, all but a vanishingly small number of spaces will become much worse. We’re talking on LessWrong, I do not think the moderators here are at that level, and yet the space is much improved relative to other places. Seems like 4chan decided better not to begin, and I like LessWrong more.
“Better not to begin” wouldn’t be “4chan”, it would be “nothing”.
I agree that the moderators on Less Wrong aren’t quite at the level we’re talking about, but they’re certainly closer than most people in most places. (And many of the things I perceive to be mistakes in moderation policy are traceable to the gap between their approach and the sort of approach I am describing here.) At the very least, it’s clear that the LW mods have considerable experience with having to evaluate advice that does, in fact, come from their (our) enemies.
Er, sorry, I think you might’ve misread my comment? What I was saying was that the more seriously the people with the power and authority take the problem, the better it is. (I think that perhaps you got the direction backwards from how I wrote it? Your response would make sense if I had said “directly proportional”, it seems to me.)
“And the severity of the failure will be inversely proportional to how seriously the people with the power and authority take this problem, and to how much effort they put into addressing it.”
Hrm. Yes, I seem to have read it differently, apologies. I think I flipped the sign on “the severity of the failure” where I interpreted it as the failure being bigger the more seriously people with power and authority took the problem.
“Better not to begin” wouldn’t be “4chan”, it would be “nothing”.
I agree that the moderators on Less Wrong aren’t quite at the level we’re talking about, but they’re certainly closer than most people in most places.
Yeah. I prefer having LessWrong over having nothing in its place. I even prefer having LessWrong over having nothing in the place of everything shaped like an internet forum.
Do the LW mods pass your threshold for good enough it’s worth beginning? I think a lot of my incredulity here comes from trying to figure out how big that gap is, though in terms of the specific problem I’m trying to solve I think I need to take as a premise that I start with whatever crop of ACX organizers I’m offered by selection effects.
Do the LW mods pass your threshold for good enough it’s worth beginning?
Well… hard to say. The LW mods now pass that threshold[1], but then again they’re not beginning now; they began eight years ago.
I think a lot of my incredulity here comes from trying to figure out how big that gap is, though in terms of the specific problem I’m trying to solve I think I need to take as a premise that I start with whatever crop of ACX organizers I’m offered by selection effects.
Yes… essentially, this boils down to a pattern which I have seen many, many times. It goes like this:
A: You are trying to do X, which requires Y. But you don’t have Y.
B: Well, sure, I mean… not exactly, no. I mean, mostly, sort of… (a bunch more waffling, eventually ending with…) Yeah, we don’t have Y.
A: So you can’t do X.
B: Well, we have to, I’m afraid…
A: That’s too bad, because you can’t. As we’ve established.
B: Well, what are we going to do, just not do X?
A: Right.
B: Unacceptable! We have to!
A: You are not going to successfully do X. That will either be because you stop trying, or because you try but fail.
B: Not doing X is not an option!
B tries to do X
B fails to do X, due to the lack of Y
A: Yep.
B: Well, we have to do our best!
A: Your best would be “stop trying to do X”.
B ignores A, continues trying to do X and predictably failing, wasting resources and causing harm indefinitely (or until external circumstances terminate the endeavor, possibly causing even more harm in the process)
In this case: a bunch of people who are completely unqualified to run meetups are trying to run meetups. Can they run meetups well? No, they cannot. What should they do? They should not run meetups. Then who will run the meetups? Nobody.
Now, while reading the above, you might have thought: “obviously B should be trying to acquire Y, in order to successfully do X!”. I agree. But that does not look like “do X anyway, and maybe we’ll acquire Y in the process”. (Y, in this case, is “the skills that we’ve been discussing in this comment thread”.) It has to be a goal-directed effort, with the explicit purpose of acquiring those skills. It can be done while also starting to actually run meetups, but only with an explicit awareness and serious appreciation of the problem, and with serious effort being continuously put in to mitigate the problem. And the advice for prospective meetup organizers should tackle this head-on, not seek to circumvent it. And there ought to be “centralized” efforts to develop effective solutions which can then be taught and deployed.
You might say: “this is a high bar to clear, and high standards to meet”. Yes. But the standards are not set by me, they are set by reality; and the evidence of their necessity has been haunting us for basically the entirety of the “rationalist community”’s existence, and continues to do so.
[1] Approximately, anyway. There’s a bunch of mods, they’re not all the same, etc.
Well… hard to say. The LW mods now pass that threshold[1], but then again they’re not beginning now; they began eight years ago.
My sense is that if the mods had waited to start trying to moderate things until they met this threshold, they wouldn’t wind up ever meeting it. There’s a bit of, if you can’t bench press 100lbs now, try benching 20lbs now and you’ll be able to do 100lbs in a couple years, but if you just wait a couple years before starting you won’t be able to then either.
Ideally there’s a way to speed that up and among the ideas I have for that is writing down some lessons I’ve learned in big highlighter. I’m pretty annoyed at how hard it is to get a good feedback loop and get some real reps in here.
Yes… essentially, this boils down to a pattern which I have seen many, many times. It goes like this:
...
In this case: a bunch of people who are completely unqualified to run meetups are trying to run meetups. Can they run meetups well? No, they cannot. What should they do? They should not run meetups. Then who will run the meetups? Nobody.
There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers, medical school seems a good idea. I do not think meetups are like this and I do not think the majority of the organizers are completely unqualified; even if they’re terrible at the interpersonal conflict part they’re often fine at picking a location and time and bringing snacks. That makes them partially qualified.
The −2std failure case is something like, they announced a time and place that’s inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don’t have a good time. This is not great and I try to avoid that outcome where I can, but it’s not so horrible that I’d give up ten average meetups to prevent it. Worse outcomes do happen where I do get more concerned.
It’s possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced” and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups. Which, given my domain is ACX groups, isn’t that surprising. Conflict can come for them anyway.
Hrm. Maybe a helpful model here is that I’m trying to reduce the failure rate? The perfect spam filter bins all spam and never bins non-spam. If someone woke up, went to work, and improved the spam filter such that it let half as much spam through, that would be progress. If because of my work half the [organizers that would have burned out / attendees who would have been sadly driven away / malefactors who would have caused problems] have a better outcome, I’ll call it an incremental victory.
And there ought to be “centralized” efforts to develop effective solutions which can then be taught and deployed.
::waves:: Hi, one somewhat central fellow, trying to develop some effective solution I can teach. I don’t think I’m the only one (as usual I think CEA is ahead of me) but I’m trying. I didn’t write much about this for the first year or two because I wasn’t sure which approaches worked and which advisors were worth listening to. Having gone around the block a few times, I feel like I’ve got toeholds, at least enough to hopefully warn away some fool’s mates.
fwiw, these are what I’d say a 2std failure case of a rationalist meetup looks like:
https://www.wired.com/story/delirious-violent-impossible-true-story-zizians/
https://variety.com/2025/tv/news/julia-garner-caroline-ellison-ftx-series-netflix-1236385385/
https://www.wired.com/story/book-excerpt-the-optimist-open-ai-sam-altman/
(Ways my claim could be false: there could have been way more than 150 rationalist meetups, so that these are lower than 2 std, or these could not have, at any point in their development, counted as rationalist meetups, or Ziz, Sam, and Eliezer could have intended these outcomes, so these don’t count as failures)
I think of Ziz and co as less likely than 2std out, for about the reasons you give. I tend to give 200 as the rough number of organizers and groups, since I get a bit under that for ACX Everywhere meetups in a given season. If we’re asking per-event, Dirk’s ~5,000 number sounds low (off the top of my head, San Diego does frequent meetups but only the ACX Everywheres wind up on LessWrong, and there are others like that) but I’d believe 5,000–10,000.
You’re way off on the number of meetups. The LW events page has 4684 entries (kudos to Said for designing GreaterWrong such that one can simply adjust the URL to find this info). The number will be inflated by any duplicates or non-meetup events, of course, but it only goes back to 2018 and is thus missing the prior decade+ of events; accordingly, I think it’s reasonable to treat it as a lower bound.
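(Part of this disagreement is arithmetic: how far out in the tail three catastrophic outcomes are depends on the denominator. Here is a small sketch converting “k failures out of N meetups” into an equivalent number of standard deviations via a one-sided normal tail. The totals 150, 200, and 5,000 are the figures floated in this thread; treating the “N std out” framing as a normal tail is my own simplification.)
```python
from statistics import NormalDist

# How many standard deviations out is "3 catastrophic failures out of N meetups",
# if the failure rate is read as a one-sided tail of a standard normal distribution?
catastrophes = 3
for total_meetups in (150, 200, 5000):
    rate = catastrophes / total_meetups
    z = -NormalDist().inv_cdf(rate)   # z such that P(Z <= -z) == rate
    print(f"N = {total_meetups:>5}: rate = {rate:.2%}, roughly {z:.1f} std out")
```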
There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers, medical school seems a good idea. I do not think meetups are like this and I do not think the majority of the organizers are completely unqualified; even if they’re terrible at the interpersonal conflict part they’re often fine at picking a location and time and bringing snacks. That makes them partially qualified.
FWIW, my experience is that rationalist meetup organizers are in fact mostly terrible at picking a location and at bringing snacks. (That’s mostly not the kind of failure mode that is relevant to our discussion here—just an observation.)
Anyhow…
The −2std failure case is something like, they announced a time and place that’s inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don’t have a good time. This is not great and I try to avoid that outcome where I can, but it’s not so horrible that I’d give up ten average meetups to prevent it. Worse outcomes do happen where I do get more concerned.
All of this (including the sentiment in the preceding paragraph) would be true in the absence of adversarial optimization… but that is not the environment we’re dealing with.
(Also, just to make sure we’re properly calibrating our intuitions: −2std is 1 in 50.)
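(A quick check of that calibration, for what it’s worth: the exact one-sided figure at two standard deviations is closer to 1 in 44, which is in the same ballpark as the quoted “1 in 50”.)
```python
from statistics import NormalDist

p = NormalDist().cdf(-2)   # probability of landing 2 or more std below the mean
print(f"P(Z <= -2) = {p:.4f}  (about 1 in {1 / p:.0f})")   # ~0.0228, about 1 in 44
```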
It’s possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced” and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups.
No, I don’t think that’s it. (And I gave up on the “a rationalist meetup ought to have some rationality practiced” notion a long, long time ago.)
Is it better to have a rationality meetup with a baseline level of unskilled ad-hoc conflict resolution, or no rationality meetup? (You can ask the same question about any social event potentially open to the public.) I imagine the answer would depend on the number of people expected to attend—informal methods work better for groups of 10 than for groups of 100.
It is better to have no rationality meetup.