Interest In Conflict Is Instrumentally Convergent
Why is conflict resolution hard?
I talk to a lot of event organizers and community managers. Handling conflict is consistently one of the things they find the most stressful, difficult, or time consuming. Why is that?
Or, to ask a related question: Why is the ACX Meetups Czar, tasked with collecting and disseminating best practices for meetups, spending so much time writing about social conflict? This essay is not the whole answer, but it is one part of why this is a hard problem.
Short answer: Because interest in conflict resolution is instrumentally convergent. Both helpful and unhelpful people have reason to express strong opinions on how conflict is handled.
Please take as a given that there exists (as a minimum) one bad actor with an interest in showing up to (as a minimum) one in-person group.
I.
See, a funny thing about risk teams: the list of red flags you have? On it, one of them is “Customer evinces an odd level of interest or knowledge in the operations of internal bank policies.”
Imagine you are a bank security guard.
You are standing in your bank, sipping your coffee and watching people come in and out. It’s a good gig, bank security, they have dental insurance and the coffee machine makes a surprisingly good cup. As you’re standing there, someone in sunglasses and a baseball cap waiting in line strikes up a conversation with you.
“Nice day, isn’t it?” they say.
“Sure is,” you reply.
“Nice bank too,” they say, “I love the architecture on these old buildings. Good floors. Do you know if it has a basement level, maybe with a vault?”
“Yeah, concrete flooring.”
“Nice, nice,” this stranger (who you start thinking of as Sunglasses) says, “You know, I’m also into videography. What kind of cameras do you have around these parts? Like, about how many, and covering what angles?” You notice Sunglasses has a notepad out, pen held expectantly.
“. . . you know, I’m not sure I should tell you that,” you say slowly. “It’s not like it’s a secret exactly, the cameras or at least their bubbles are pretty visible, but I don’t think idle curiosity is a good reason to tell strangers how the bank security system works.”
“Okay, I admit it’s not just curiosity,” Sunglasses says with a charming smile. “What if I’m just concerned whether my money is going to be safe in this bank? Isn’t it reasonable to want to understand how it’s kept safe from bank robbers, and how the teller will figure out if I’m actually me when I come to withdraw money again?”
“That would be reasonable,” you answer, “and lots of people might be interested in knowing their money is safe and they’ll be able to get it back. Some of that information is public, we have a newsletter about it.”
“But not all of it’s public,” Sunglasses points out. “Every customer should care that their bank is secure. It seems like you’re new at this whole bank security thing. Look, I’m willing to help you out, make some suggestions about vault locks and camera angles, maybe recommend a good security firm. Looks like you’re using TS-53 cameras? Those were fine for ten years ago, but these days networking TLAs are faster and someone could technobabble their tachyons to break in.”
You stare at Sunglasses. “I admit I’m new at security. It would be nice to make the bank more secure, and you’re right that customers have a legitimate interest in the bank’s money being well defended. What you said about the TS-53 sounds right at first pass, and you seem very confident. But I am also getting increasingly suspicious about your interests here.”
Sunglasses shakes their head disarmingly. “I solemnly swear I have a lot of experience with bank security systems, and I think there may be a weakness in the anti-bank robber measures you have here. I’ve seen a lot of good banks get robbed, and that’s why I have such strong opinions on how bank security should work. Just let me tell you how to arrange the cameras, what vault lock to install, and how to set up the night guard patrols.”
“No,” you say. “While some people do have a professional skillset around bank security, not everyone with that skillset is automatically on my team. I would not do better at keeping the customer’s money safe if I accepted help from the people most insistent on giving me help. I’m going to ask you to leave now.”
“Fine, be that way,” Sunglasses says. Then they cup their hands and yell to the other customers, “Hey everyone, this bank guard is throwing me out even though I haven’t done anything! They’re probably racist! Y’all should get another bank guard!”
II.
“If you once tell a lie, the truth is ever after your enemy; and there’s a lot of people out there telling lies—” Harry’s voice stopped.
“What does that have to do with Fawkes?” she said.
Harry withdrew his spoon from his cereal, and pointed in the direction of the Head Table. “The Headmaster has a phoenix, right? And he’s Chief Warlock of the Wizengamot? So he’s got political opponents, like Lucius. Now, d’you think that opposition is going to just roll over and surrender, because Dumbledore has a phoenix and they don’t? Do you think they’ll admit that Fawkes is even evidence that Dumbledore’s a good person? Of course not. They’ve got to invent something to say that makes Fawkes… not important. Like, phoenixes only follow people who charge straight at anyone they think is evil, so having a phoenix just means you’re an idiot or a dangerous fanatic.”
-Harry Potter and the Methods of Rationality, Eliezer Yudkowsky
Let’s be reductive and say there are five kinds of people in the world.
The Professionals: People with a legitimate, professional or semi-professional interest in social conflict. Divorce lawyers, social workers, therapists, the chair of the sci-fi convention complaint department.
The Curious: People with a noticeable curiosity or special interest in social conflict. These people took up reading Difficult Conversations or Non-Violent Communication or the like the way other folks decided to study Spanish opera, or U.S. Navy ships, or knitting.
The Oddballs: People with weird, outlier behaviors that trip a lot of false positives and as a result have a lot of experience with conflict resolution systems. Autistics and other neuroatypicals, ethnic and religious minorities, a fair number of homeschoolers.
“Normal” people: Those who don’t really care about social conflict as long as it’s not bothering them, and it usually doesn’t.
Bad actors: People who cause problems and try to get away with it, which here includes people who just legitimately want to pursue their hobby of punching everyone who disagrees with them in the face.[1]
(These categories absolutely overlap and intersect some of the time. Some autistics set off a bunch of yellow or even red flags, then develop a special interest in human social norms as a result. Some therapists are abusive bad actors, leveraging the privacy and power of their position to do harm. This list of overlaps is not exhaustive.)
Now let’s say that the bad actor is not an idiot. They have considered what and who might stop them, and what they might do about it.
It is an obvious, straightforward move to accuse whatever system is responsible for catching them of being corrupt, and the people running that system of being horrible or incompetent.
Yes, there are other reasons that someone might say the system is flawed. Yes, sometimes the people in charge of it make mistakes. Sometimes, yeah, it is actually the case that there’s a glaring problem with the way hypothetical bad actors are identified and treated. The KGB in Soviet Russia is a famous historical example, but there are many more and many smaller examples of misrun HR departments and convention safety chairs in over their heads. No, I do not think I or humanity at large have found the One True Way to correctly handle complaints and conflict. Yes, I want to improve the setups around me and to get more skilled at handling things like this. Yes, sometimes I think it is correct to try and dismantle the thing and put something better in its place.
But.
Even if you had a system that was perfectly accurate, universally applicable and flexible, whose agents were unfailingly correct in how they carried out its orders, you would have some portion of people who have an obvious motive to say the system is broken and the agents are horrible. If the bad actors had a little forethought, they wouldn’t say “the system is horrible, they won’t let me punch people in the face.” They’d say things like “the system is horrible, it thinks that innocent person punched someone in the face even though they didn’t.”
And when you don’t have a perfect system, but only a decent system with reasonable people as its agents, one that doesn’t quite match the local social norm but is making an honest effort, then you will wind up with a lot of things an antagonist can point at to argue that nobody should trust it.
Yes, due process and rights for the imprisoned. Yes, the system also has an incentive to smear and put away anyone who threatens to rebel. And yes, we can always try to do better. But maybe be a little suspicious when the person in sunglasses, already being dragged into the cop car, complains that the cop is just being racist and decries the legitimacy of the justice system?
(Though also remember I mostly deal with the kind of complaints you get about ACX meetups. It’s considerably less dramatic than that sentence might sound.)
III.
Firewalls [CBR03], packet filters, intrusion detection systems, and the like often have difficulty distinguishing between packets that have malicious intent and those that are merely unusual. The problem is that making such determinations is hard. To solve this problem, we define a security flag, known as the “evil” bit, in the IPv4 [RFC791] header. Benign packets have this bit set to 0; those that are used for an attack will have the bit set to 1.
When you are trying to set up your disciplinary process, justice system, network security permissions, or other system by which you will identify and handle bad actors, you should be aware that some of the people who appear to be trying to help you might have ulterior motives.
If you do not have some reason to expect that you are already good at this — if you’re one of the normal people in the bullet points above, who just wants to have a nice society or meetup group and is wondering why we can’t just do something simple and reasonable — then the bad actors probably have more experience with this than you do. Consider that you may have never interacted with a complaint department at all, while they may have been through the ban committee process of multiple different groups.
(For that matter, being the person in charge of banning others is a position with obvious appeal to someone who suspects they may come to the attention of The System sooner or later. If there are ninety-eight normal people, one honest professional, and one bad actor, and you have no way to distinguish them, then you may prefer choosing your overseer by random lot rather than taking a 50/50 chance between your bad actor and your honest professional (the two most likely to volunteer), even though the honest professional is really, really useful if you have one.)
In Pareto Best and the Curse of Doom, I talked about how finding people with the overlaps of multiple skills is hard. To use the example of a community organizer, there’s selection pressure to have one who is a good marketer, good with handling logistics like the venue and food, and charismatic in person.
Over a long enough run and a large enough community, there’s eventually some pressure for them to be good at conflict resolution, but a group can get surprisingly big and last surprisingly long before this becomes important — and if they’re bad at it, or just normal amounts of competent, there are many ways for a group to keep growing despite constant arguments until the organizer steps away and even after.
No other part of organizing has this problem. If you don’t know what activities to run, you can ask, and people will tell you what they like. If you don’t know how to advertise the event, you can ask, and people might have helpful suggestions. If you don’t know how to book a venue, you can basically just ask, and it’s pretty unlikely anyone has a motive to sabotage your venue selection. Maybe they own the venue and they’re trying to sell it to you, but that’s a bit more straightforward. Not so with conflict resolution.
IV.
“If I could predict exactly where Stockfish 15 would move, I could defeat Magnus Carlsen just by making the moves I’d predict Stockfish would make. Maybe if your moves have sufficiently straightforward vulnerabilities there’ll be an obvious way to exploit those, as seen by a human grandmaster; but Stockfish can search more moves than any human can, better than we can, and it might find an even better way to defeat you.”
. . .
“How about if I move my rook over here?” the kid says, a few moves into the game. “Then the AI will try to take it with its queen, and I’ll come in and grab the queen. How will the AI win after I’ve got its queen?”
“Okay, now you’re failing at putting yourself in the AI’s shoes and asking how to win from its position, even using *your own* intelligence,” you tell the kid.
I don’t have a solution to this.
I keep encountering people with very strong opinions on the correct way to handle complaints and conflict. I don’t have an omniscient view of who is good at it, who is right and who is wrong. But, uh. I notice that for something like half of the people who have expressed very strong opinions on this to me, it turns out there are a bunch of complaints about them, and if I used the system or rules they’re advocating for, they’d be in the clear[2].
(Which makes sense! If I heard lots of the people dragged away in the night by the KGB had strong opinions on how great jury trials were and that they’d have been cleared by a jury trial, that wouldn’t surprise me. And yet I also wouldn’t be surprised to hear lots of the losing defendants of a healthy jury trial system have strong opinions on how the judge and the cops and the whole system are out to get them.)
If you are a good and virtuous person, you may be interested in how conflict resolution is done and in having a part in it. If you’re a harmless nonconformist, you’re a bit more likely to be interested in how conflict resolution is done and in having a part in it. If you are a nefarious person who wants to rob banks or punch faces, you have an obvious interest in how conflict resolution is done and in having a position of trust or authority in it.
If you just want the thing to work and not be a big deal, you should be at least somewhat suspicious of the people offering to help. Not a lot suspicious! Most people are basically well meaning, I’m not advocating pervasive paranoia here. Maybe less suspicious, if you have a firmer explanation for why they know this information and why they’re interested, but remember that bad actors can lie or mislead about why they’re interested.
And this generalizes all the way upstream of the conflict. If some part of the system doesn’t make or carry out the decisions, but is just the part that’s supposed to investigate and report the truth of what happened, that is obviously a super useful part of the system to get control of. If there’s a verification setup or a vote-counting role that decides who is supposed to investigate and report the truth, then that vote-counting role is a super useful part of the system to get control of, or, if it can’t be controlled, to discredit.
Thus the answer. Why is the ACX Meetups Czar, tasked with collecting and disseminating best practices for meetups, spending so much time writing about social conflict? Why is this the topic that creates so much stress for so many otherwise skilled organizers?
Because this is the topic that is adversarial, not just during an incident, but in every step leading up to it. If you take everyone’s advice on how to build your bank security system, you may well be doomed before the alarm sounds — if it ever does.
(Okay, but why should you trust me? Professional interest, since complaint handling is part of my role, but it’s a good question and you shouldn’t be satisfied by that answer. CONSTANT VIGILANCE.)
[1] There’s a tangent here that I plan to talk about in a future post, but: I tend to use examples which are obviously bad to do and which I expect everyone to agree are bad to do. These examples tend to be unusually bad, because I’m trying to meet that standard. I could have put “will imply everyone who disagrees with them is stupid” or “will awkwardly hit on every woman attendee” instead of the face punching thing. I could have put “will get drunk and stand so close to others that people can smell the alcohol on their breath” or “will loudly bring up how Chairman Mao was a great leader at every single meetup, even if the event is about ice skating.”
There is an issue of distinguishing what side of a line an edge case falls on, or how hard to come down on something that’s kinda bad but not seriously bad, or how to carve out spaces where a thing that’s bad in most places is accepted here. It’s an important issue. I’m ignoring it in this essay.
[2] Or at least more in the clear. Once in a while someone will advocate for rules they’re pretty plainly breaking, but they tend to assert some interpretation where what they’re doing is fine, actually.
This is a horrible situation, where excessive knowledge of some bad action X could be evidence of being:
the kind of bad actor who does X or plans to do it, or
a person who was falsely accused of doing X, or
someone who spends a lot of effort on protecting themselves or others from X, or
a nerd who happens to be obsessed with X.
Taking all of them together, we get a group that has the best knowledge of X, and which is dangerous to approach, because too many of them are bad actors. (Also, there is a possible overlap between the groups.)
Even worse, if you decide to avoid approaching the group and just study X yourself… you become one of them.
However, having zero knowledge about X makes you an easy victim. So what are we supposed to do?
I guess the standard solution is something like “try to learn about X without coming into too much contact with the teachers (learn from books, radio, TV, internet, or visit a public lecture), and keep your knowledge about X mostly private (do things that reduce the chance of X happening to you, maybe warn your friends about the most obvious mistakes, but do not give lectures yourself)” or sometimes “find a legible excuse to learn about X (join the police force)”. Which, again, is something that the bad actor would be happy to do, too.
They say that former criminals make the most efficient cops. I believe it also works the other way round.
I guess Jordan Peterson would say that you cannot become stronger without simultaneously becoming a potential monster. The difference between good and bad actors is how much the “potential” remains potential. Weak (or ignorant) people are less of a threat, but also less of a help.
It could help if you know the person for a long time, so you could see how the excessive knowledge of X manifests in their actual life. Difficult to do for an online community.
Like I said, I don’t have a solution. At least, not one I’m confident and certain of. I have other essays in the pipeline with (optimistically) pieces of it.
I don’t think it’s doomed. Most security experts a bank would reasonably hire are not bank robbers, you know? I assume that’s true anyway, I’m not in that field but somehow my bank account goes un-robbed.
Checking where wildly different spheres agree seems promising. The source of advice here that I trust the most is a social worker I’ve known for years who hadn’t heard of the rationalist community; I asked them, rather than them starting, unprompted (or as part of an argument), to tell me how it should work. Put another way, getting outside perspectives is helpful: if a romantic partner seems like they might be pressuring you, describe it to a friend and see what they say.
It’s part of why I spent a while studying other communities, looking to see if there was anything that say, Toastmasters and the U.S. Marines and Burning Man and Worldcon all agreed about.
Yes, it would be useful to know how exactly that happens.
I suspect that a part of the answer is how formal employment and a long-term career change the cost-benefit balance. Like, if you are not employed as a security expert, and rob a bank, you have an X% chance of getting Y money, and a Z% chance of ending up in prison. If you get hired as a security expert, that increases the X, but probably increases the Z even more (you would be the obvious first suspect), and you probably get a nice salary, which somewhat reduces the temptation of an X% chance at Y. So even if you hire people who are tempted to rob a bank, you kinda offer them a better deal on average?
Another part of the answer is distributing the responsibility, and letting the potential bad actors keep each other in check. You don’t have one person overseeing all security systems in the bank without any review. One guy places the cameras, another guy checks whether all locations are recorded. One guy knows a password to a sensitive system (preferably different people for different sensitive systems), another guy writes the code that logs all activities in the system. You pay auditors, external penetration testers, etc.
There is also reputation. If someone worked in several banks, and those banks didn’t get robbed, maybe it is safe to hire that person. (Or they are playing a long con. Then again, many criminals probably don’t have the patience for such long plans.) What about your first job? You probably get a role with less responsibility. And they probably check your background?
...also, sometimes the banks do get robbed; they probably do not always make it public news. So I guess there is no philosophically elegant solution to the problem, just a bunch of heuristics that together reduce the risk to the acceptable level (or rather, we get used to whatever is the final level).
So… yeah, it makes sense to learn the heuristics… and there will be obvious objections… and some of the heuristics will be expensive (in money and/or time).
I think the amount of cash a bank loses in a typical armed robbery really isn’t that large compared to the amounts of money the bank actually handles—bank robbers are a nuisance but not an existential threat to the bank.
The actual big danger to banks comes from insiders; as the saying goes, the best way to rob a bank is to own one.
If you’re good at it, you can purchase the knowledge without giving the seller a position of power. Intelligence agencies purchase zero-days from hackers on the black market. Foreign spies can be turned into double agents with money.
Any opinion on whether this is a somewhat good solution?
https://www.lesswrong.com/posts/Q3huo2PYxcDGJWR6q/how-to-corner-liars-a-miasma-clearing-protocol
Trying to summarize the method:
list all known facts
list all competing theories
make an M×N table, highlight places where the fact contradicts the theory
require an explanation for each such place in the current theory
if a new theory is made, add a new column to the table, and evaluate all cells in the new column
I guess this mostly avoids the failure mode where someone uses an argument A to support their theory X, then, under the weight of evidence B, switches to a theory Y (because B was incompatible with X, but is compatible with Y), and you fail to notice that A is now incompatible with Y… because you vaguely remember that “we talked about A, and there was a good explanation for that”.
The admitted disadvantage is that it takes a lot of time.
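A minimal sketch of the bookkeeping I have in mind, in Python; the facts, theory names, and contradiction judgments below are invented placeholders, not anything from the linked post:

```python
# Hypothetical facts; each fact is a row of the table.
facts = ["F1: the email was sent at 2am",
         "F2: they say they were asleep by midnight"]

# Hypothetical theories; each theory is a column. True marks a cell where
# the theory contradicts (or has not yet explained) that fact.
theories = {
    "T1: honest mix-up":   {facts[0]: False, facts[1]: True},
    "T2: deliberate edit": {facts[0]: False, facts[1]: False},
}

def unexplained_cells(facts, theories):
    """Return every (theory, fact) pair still owing an explanation."""
    return [(name, fact)
            for name, cells in theories.items()
            for fact in facts
            if cells.get(fact, True)]  # a missing cell counts as unexplained

def add_theory(theories, name, cells):
    """A new theory adds a whole new column: every fact gets evaluated again."""
    theories[name] = cells

for name, fact in unexplained_cells(facts, theories):
    print(f"{name} still owes an explanation for {fact!r}")
```

The point of the table is just that switching theories forces a fresh pass over every fact, instead of letting old explanations quietly carry over.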
Makes me empathize with the defender :), but let me tell you, being interrogated in an airport for six hours trying to convince a US immigration agent that I’m an oddball, not a danger, is not fun.
All of this sounds reasonable, on the surface…
And yet I notice that the view that “people who have opinions about how [whatever] should be done are unusually likely to be bad actors who want me to do [whatever] in such a way as to benefit them, therefore I should be suspicious of their motives and suggestions” is memetically adaptive. Whenever you come across this idea, it is to your benefit to immediately adopt it—after all, it means that you will thenceforth need to spend less effort evaluating people’s suggestions and opinions, and have a new and powerful reason to reject criticism. And the idea protects itself: if someone suggests to you that this whole perspective is misguided and harmful, well, aren’t they just maliciously trying to undermine your vigilance?
Anyhow, I am not a meetup czar, so I don’t have to make the decisions that you make. And I don’t go to many meetups, so I am more or less unaffected by those decisions. I do have a bit of experience running communities, though; and, of course, the usual plethora of experience interacting with people who run communities. My own view, on the basis of all of that experience, is this:
Community members should default to the assumption that you are basically the KGB.
Your own approach and policies should work unproblematically even if everyone assumes that you are basically the KGB. (This is especially true if you are not the KGB at all.)
And if your approach to running a community is predicated on the members of that community not treating you as if you are the KGB, then you are definitely the KGB.
I have a couple of thoughts here. One is that I don’t think this is true for most values of [whatever]. If someone has suggestions about the venue, or the meetup activities, or announcement platforms, I don’t think this dynamic is in play. If I get advice on job searching or algebra homework or the best way to bake a loaf of sourdough, I’m not getting nearly as much adverse selection as for conflict resolution from within the community I’m involved in. Who has a motive to subtly sabotage my sourdough?
If someone read this essay and came away with a fully general counterargument against listening to advice on any subject, my guess is there’s a big reading comprehension failure happening.
It isn’t as clearly a failure of reading comprehension if someone comes away with the idea that they shouldn’t listen to any advice on handling conflict specifically, though I think that would also be incorrect. Finding people who are trustworthy, good at handling it well, and willing to teach you is wonderful. I’ve been trying to learn the most from sources well outside the rationalist community, but I think there is good advice to be had. Just, not uncritically trusted?
Also, some people seem to think this class of problem should be easy. For those people I want to make the point that it is (at least sometimes) an adversarial situation.
Probably nobody, but then again, your sourdough is probably not impinging on anyone’s interests, either. Baking a loaf of sourdough doesn’t really come with opportunities to exploit other people for your own gain, etc. So of course there’s not going to be much controversy.
But whenever there is controversy, usually due to the existence of genuinely competing interests, then motives for sabotage become plausible, whereupon it immediately becomes tempting to declare that those who think that you ought to be doing things differently are just trying to sabotage you.
I agree, it certainly is an adversarial situation—and not only sometimes, but most of the time. And I agree that you should not uncritically trust advice that you hear from any sources. In fact, you shouldn’t even trust advice that you hear from yourself.
Consider your bank example again. You might think: “hmm, that guy has an odd amount of knowledge of, and/or interest in, internal bank practices and security and so on; suspicious!”. Then you learn that he works at a bank himself, so it turns out that his knowledge and interest aren’t suspicious after all—great, cancel that red flag.
No! Wrong! Don’t cancel it! Put it back! Raise two red flags! (“An analysis by the American Bankers Association concluded that 65% to 70% of fraud dollar losses in banks are associated with insider fraud.”) Suspect everyone, especially the people you’ve already decided to trust!
But of course “suspect” is exactly the wrong word here. If you’re having to suspect people, you’ve already lost.
Consider computer security. I ask about the security software that your company is using to protect your customers’ data—could I see the code? Which cryptographic algorithms do you use? You’re suspicious; what do I need this information for? Who should be allowed to have this sort of knowledge?
And of course the right answer is “absolutely everyone”. It should be fully public. If your setup is such that it even makes sense to ask this question of “who should be allowed to know what cryptographic algorithm we use”, then your security system is a complete failure and nobody should trust you with so much as their mother’s award-winning recipe for potato salad, much less any truly sensitive data.
The way to ensure that you don’t accidentally give the wrong person insider access to your system is to construct a system such that nobody can exploit it by having insider access.
(Another way of putting this is to say that selective methods absolutely do not suffice for ensuring the trustworthiness and integrity of social systems.)
The same is true for the problem of “from whom to take advice on conflict resolution”. You should not have to figure out the motives of the advice-giver or to decide whether to trust their advice. Your procedure for evaluating advice should work perfectly even if the advice comes from your bitter enemy who wishes nothing more than to see you fail. And then you should apply that same procedure to what you already believe and the practices you are already employing—take the advice that you would give to someone, and ask what you would think of it if it had come to you from someone of whom you suspected that they might be your worst and most cunning enemy. Is your evaluation procedure robust enough to handle that?
If it is not, then any time spent thinking about whether the source of the advice is trustworthy is pointless, because you can’t very well trust someone else more than you trust yourself, and your evaluation procedure is too weak to guard against your own biases. And if it is robust enough, then once again it is pointless to wonder whom you should trust, because you don’t have to trust anyone—only to verify.
This makes sense for computer security, but for biosecurity it doesn’t work, because it’s a lot harder to ship a patch to people’s bodies than to people’s computers. The biggest reason there has never been a terrorist attack with a pandemic-capable virus is that, with few exceptions (such as smallpox), we don’t know what they are.
See also:
In certain domains, I absolutely can and will do this, because “someone else” has knowledge and experience that I don’t and could not conveniently acquire. For example, if I hire lawyers for my business’s legal department, I’m probably not going to second-guess them about whether a given contract is unfair or contains hidden gotchas, and I’m usually going to trust a doctor’s diagnosis more than I trust my own. (The shortfalls of “Doctor Google” are well-known, so although I often do “do my own research” I only trust it so much.)
And how do you choose who the “someone else” is?
Honestly? By going to the list of doctors that my health insurance will pay for, or some other method of semi-randomly choosing among licensed professionals that I hope doesn’t anti-correlate with the quality of their advice. There are probably better ways, but I don’t know what they are offhand. ::shrug::
If you were accused of a crime and intended to plead not guilty, how would you choose a defense attorney, assuming you weren’t going to use a public defender?
So you trust yourself to decide how to select a doctor; you trust your decision procedure, which you have chosen.
I’d ask trusted friends for recommendations, because I trust myself to know whom to ask, and how to evaluate their advice.
I’m not sure whether you mean a specific “you” or a general “you” when you’re talking about assuming some “you” is the KGB. I do think it’s useful to build a system which does not assume the watchman is perfectly trustworthy and good. In my own case, one of the first things I did once I started to realize how tricky this part of my role might be was write down a method for limited auditing of myself. That said:
I’m not sure how literally to take the “unproblematically” adverb here. If you’re being literal, then I disagree; part of my thesis here is that sometimes there will be as many problems as enemy action can cause, and they will be able to cause some problems.
(If you’re on the lookout for a fully general counterargument, here’s one I haven’t found a way around! This theory treats occasional strident complaints about the way a resolution system is operating as very little evidence that the system is operating badly, because one would expect occasional bad actors to try shaking everyone’s trust in the system even if it was a good system. And yes, that is such a suspicious theory for me in particular to put forward. Dunno what to tell you here.)
Indeed. But really, I wouldn’t say “suspicious”, exactly; I’d say “yes, it makes perfect sense that you would say this”. This isn’t even an accusation, or anything like that. It’s just the logical outcome of the setup.
The question is, can a bad actor shake everyone’s trust in the system? If they can, then is it really a good system?
The best answer to “should I trust you[r system]?” isn’t “yes, you should, and here is why”. It’s “you don’t have to”.
My current best guess about what you are trying to say is something like this: “People should give up on the idea of making systems that are resilient against bad actors on both sides. You should just give unlimited power to one side (the moderator, the meetup czar, the police...) and that’s it. Now at least the system is resilient against bad actors on one side.”
EDIT: Never mind, after reading your other comments, I guess you believe that community moderation can be solved by an algorithm. Ok, I might believe it if you show me the code.
Uh… no. Definitely not.
… what in the world?
No, I don’t believe anything like this.
Honestly, it would be hard to get further from my views on this subject than what you’ve described…
I think he means you should design a trustless system, à la public key cryptography.
But the inference is correct, since you are discarding the probability mass on “innocent normie”, no?
I am not sure I follow. Could you say more? What do you mean by saying that I am discarding that probability mass?
Thanks, yes.
The original post drew a distinction between the professionals/the curious/the oddballs/normal people/bad actors.
People who have opinions about how [whatever] should be done are not normies (where [whatever] relates to defenses against bad actors).
Normies are innocent but also the large majority of people.
When learning that someone has opinions about [whatever], the prior probability of their being a bad actor shifts from #bad actors / (#professionals + #curious + #oddballs + #normal people + #bad actors) to #bad actors / (#professionals + #curious + #oddballs + #bad actors).
That is, you are discarding this probability: #normal people / (#professionals + #curious + #oddballs + #normal people + #bad actors).
Thus, the prior probability of someone interested in [whatever] being a bad actor rises.
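A toy version of that arithmetic with made-up headcounts (the numbers are only illustrative, not from the post):

```python
# Made-up headcounts for the five categories in the post.
professionals, curious, oddballs, normals, bad_actors = 2, 3, 5, 88, 2

total = professionals + curious + oddballs + normals + bad_actors
prior_bad = bad_actors / total                    # P(bad actor) = 2/100 = 0.02

# Condition on "has opinions about [whatever]": the normals drop out.
opinionated = professionals + curious + oddballs + bad_actors
posterior_bad = bad_actors / opinionated          # P(bad | opinionated) = 2/12 ≈ 0.17

print(prior_bad, posterior_bad)
```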
I’m not saying anything the post isn’t saying, I’m just pointing out that the forecasting/simple Bayesian tradition of knowledge really agrees with this post. You then have further arguments around the virtue of orienting the world around happy paths and normies, but still.
Uh… sure, that’s true enough, but this logic requires that we first accept the OP’s categorization scheme—which is part of precisely the meme that I am referring to!
OP notes that “These categories absolutely overlap and intersect some of the time”. This is true, but the trouble is that taking this caveat seriously means discarding the logic of the argument.
Consider an alternate set of categories:
Professionals, who might also be bad people.
The curious, many of whom are also oddballs, and who might also be bad people.
The oddballs, who might also be bad people.
Normal people, who might also be bad.
Hmm. Doing a Bayesian calculation on this might be tricky. Perhaps we can separate out some of those, like so:
1. Professionals who are good people.
2. Professionals who are bad people.
3. The curious and/or oddballs who are good people.
4. The curious and/or oddballs who are bad people.
5. Normal people who are good.
6. Normal people who are bad.
We learn that someone has opinions about [whatever]. We now discard categories #5 and #6. But unbeknownst to us, the ratio of bad:good in the overall population was actually lower than the ratio of bad:good among normal people. So learning this fact should reduce our subjective probability of the person being bad.
Is this possible? Well, for example, suppose that it’s 1975, and you learn that a certain person has opinions about how psikhushkas should work. In particular, this individual thinks that said institutions should work differently than they do in fact work. What should you conclude about this person?
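(To put numbers on the earlier “is this possible” question: here is a toy calculation with invented counts in which normal people have a worse bad:good ratio than everyone else, so learning that someone has opinions lowers the probability that they are bad.)

```python
# Invented counts in which "normal" people are disproportionately bad.
opinionated_good, opinionated_bad = 9, 1     # professionals, curious, oddballs
normal_good, normal_bad = 70, 20

total = opinionated_good + opinionated_bad + normal_good + normal_bad
prior_bad = (opinionated_bad + normal_bad) / total                       # 21/100 = 0.21
posterior_bad = opinionated_bad / (opinionated_good + opinionated_bad)   # 1/10  = 0.10

print(prior_bad, posterior_bad)  # conditioning on "has opinions" lowered P(bad)
```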
The logic in the OP is easily recognizable as the logic of every police force, every security service, and every authoritarian enforcement organization. It’s the logic that says “if you’re not one of us, then you’re either a clueless normie who will unthinkingly submit to our authority, or else you’re probably a criminal—unless you can, with great effort and in the face of considerable skepticism, prove to us (and yes, the burden of proof is entirely on you) that you’re one of the rare harmless weirdoes (emphasis on the ‘harmless’; if you give any hint that you’re challenging our authority, or make any move toward trying to change the system, then the ‘harmless’ qualifier is immediately stripped from you, and you move right back into the ‘criminal’ category)”.
(That “professionals” are much more likely than anyone else to be bad actors is another fact that drastically undermines the OP’s thesis—and this blind spot is not an accident. It’s just that “professionals” simply means “the ingroup”—“one of us”. As in, “You know the score, pal! If you’re not cop, you’re little people.”)
It seems to me that the most robust solution is to do it the hard way: know the people involved really well, both directly and via reputation among people you also know really well—ideally by having lived with them in a small community for a few decades.
This can also be done over the internet. Talk to their irl social circle.
I think the thesis should be “everyone has an opinion on how conflicts should be handled that miraculously works out so they’re right in any given conflict.”
I think analyzing different types of actors with different goals isn’t elucidating. Bad actors are explicitly self-serving; good actors are probably still a little biased and petty. Being right shouldn’t be the main thing, but it probably is. It’s also easier to remember that everyone has self-serving biases than “this is one of 5 different types of people whose interest in conflict resolution benefits 5 different goal categories they might have.”