Assume Bad Faith
I’ve been trying to avoid the terms “good faith” and “bad faith”. I’m suspicious that most people who have picked up the phrase “bad faith” from hearing it used don’t actually know what it means—and maybe, that the thing it does mean doesn’t carve reality at the joints.
People get very touchy about bad faith accusations: they think that you should assume good faith, but that if you’ve determined someone is in bad faith, you shouldn’t even be talking to them, that you need to exile them.
What does “bad faith” mean, though? It doesn’t mean “with ill intent.” Following Wikipedia, bad faith is “a sustained form of deception which consists of entertaining or pretending to entertain one set of feelings while acting as if influenced by another.” The great encyclopedia goes on to provide examples: the soldier who waves a flag of surrender but then fires when the enemy comes out of their trenches, the attorney who prosecutes a case she knows to be false, the representative of a company facing a labor dispute who comes to the negotiating table with no intent of compromising.
That is, bad faith is when someone’s apparent reasons for doing something aren’t the same as the real reasons. This is distinct from malign intent. The uniformed soldier who shoots you without pretending to surrender is acting in good faith, because what you see is what you get: the man whose clothes indicate that his job is to try to kill you is, in fact, trying to kill you.
The policy of assuming good faith (and mercilessly punishing rare cases of bad faith when detected) would make sense if you lived in an honest world where what you see generally is what you get (and you wanted to keep it that way), a world where the possibility of hidden motives in everyday life wasn’t a significant consideration.
On the contrary, however, I think hidden motives in everyday life are ubiquitous. As evolved creatures, we’re designed to believe as it benefited our ancestors to believe. As social animals in particular, the most beneficial belief isn’t always the true one, because tricking your conspecifics into adopting a map that implies that they should benefit you is sometimes more valuable than possessing the map that reflects the territory, and the most persuasive lie is the one you believe yourself. The universal human default is to come up with reasons to persuade the other party why it’s in their interests to do what you want—but admitting that you’re doing that isn’t part of the game. A world where people were straightforwardly trying to inform each other would look shocking and alien to us.
But if that’s the case (and you shouldn’t take my word for it), being touchy about bad faith accusations seems counterproductive. If it’s common for people’s stated reasons to not be the same as the real reasons, it shouldn’t be beyond the pale to think that of some particular person, nor should it necessarily entail cutting the “bad faith actor” out of public life—if only because, applied consistently, there would be no one left. Why would you trust anyone so highly as to think they never have a hidden agenda? Why would you trust yourself?
The conviction that “bad faith” is unusual contributes to a warped view of the world in which conditions of information warfare are rationalized as an inevitable background fact of existence. In particular, people seem to believe that persistent good faith disagreements are an ordinary phenomenon—that there’s nothing strange or unusual about a supposed state of affairs in which I’m an honest seeker of truth, and you’re an honest seeker of truth, and yet we end up persistently disagreeing on some question of fact.
I claim that this supposedly ordinary state of affairs is deeply weird at best, and probably just fake. Actual “good faith” disagreements—those where both parties are just trying to get the right answer and there are no other hidden motives, no “something else” going on—tend not to persist.
If this claim seems counterintuitive, you may not be considering all the everyday differences in belief that are resolved so quickly and seamlessly that we tend not to notice them as “disagreements”.
Suppose you and I have been planning to go to a concert, which I think I remember being on Thursday. I ask you, “Hey, the concert is on Thursday, right?” You say, “No, I just checked the website; it’s on Friday.”
In this case, I immediately replace my belief with yours. We both just want the right answer to the factual question of when the concert is. With no “something else” going on, there’s nothing stopping us from converging in one step: your just having checked the website is a more reliable source than my memory, and neither you nor the website have any reason to lie. Thus, I believe you; end of story.
In cases where the true answer is uncertain, we expect similarly quick convergence in probabilistic beliefs. Suppose you and I are working on some physics problem. Both of us just want the right answer, and neither of us is particularly more skilled than the other. As soon as I learn that you got a different answer than me, my confidence in my own answer immediately plummets: if we’re both equally good at math, then each of us is about as likely to have made a mistake. Until we compare calculations and work out which one of us (or both) made a mistake, I think you’re about as likely to be right as me, even if I don’t know how you got your answer. It wouldn’t make sense for me to bet money on my answer being right simply because it’s mine.
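To make the symmetry explicit, here is a toy version of the update (a sketch; the error rate $\epsilon$ is a made-up parameter, and I assume for simplicity that any mistake yields a different answer). Let $M$ be the event that I made a mistake and $Y$ the event that you did, independent with probability $\epsilon$ each. Conditional on our disagreeing, which requires at least one mistake, the probability that my answer is the correct one is

$$P(\neg M \mid \text{disagree}) = \frac{(1-\epsilon)\,\epsilon}{1-(1-\epsilon)^2} = \frac{1-\epsilon}{2-\epsilon} \approx \frac{1}{2}$$

for small $\epsilon$, and by symmetry the same holds for yours: the calculation has no term that favors an answer for being “mine”.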
Most disagreements of note—most disagreements people care about—don’t behave like the concert date or physics problem examples: people are very attached to “their own” answers. Sometimes, with extended argument, it’s possible to get someone to change their mind or admit that the other party might be right, but with nowhere near the ease of agreeing on (probabilities of) the date of an event or the result of a calculation—from which we can infer that, in most disagreements people care about, there is “something else” going on besides both parties just wanting to get the right answer.
But if there’s “something else” going on in typical disagreements that look like a grudge match rather than a quick exchange of information resulting in convergence of probabilities, then the belief that persistent good faith disagreements are common would seem to be in bad faith! (Because if bad faith is “entertaining [...] one set of feelings while acting as if influenced by another”, believers in persistent good faith disagreements are entertaining the feeling that both parties to such a disagreement are honest seekers of truth, but acting otherwise insofar as they anticipate seeing a grudge match rather than convergence.)
Some might object that bad faith is about conscious intent to deceive: honest reporting of unconsciously biased beliefs isn’t bad faith. I’ve previously expressed doubt as to how much of what we call lying requires conscious deliberation, but a more fundamental reply is that from the standpoint of modeling information transmission, the difference between bias and deception is uninteresting—usually not relevant to what probability updates should be made.
If an apple is green, and you tell me that it’s red, and I believe you, I end up with false beliefs about the apple. It doesn’t matter whether you said it was red because you were consciously lying or because you’re wearing rose-colored glasses. The input–output function is the same either way: the problem is that the color you report to me doesn’t depend on the color of the apple.
If I’m just trying to figure out the relationship between your reports and the state of the world (as contrasted to caring about punishing liars while letting merely biased people off the hook), the main reason to care about the difference between unconscious bias and conscious deception is that the latter puts up much stronger resistance. Someone who is merely biased will often fold when presented with a sufficiently compelling counterargument (or reminded to take off their rose-colored glasses); someone who’s consciously lying will keep lying (and telling ancillary lies to cover up the coverup) until you catch them red-handed in front of an audience with power over them.
Given that there’s usually “something else” going on in persistent disagreements, how do we go on, if we can’t rely on the assumption of good faith? I see two main strategies, each with their own cost–benefit profile.
One strategy is to stick to the object level. Arguments can be evaluated on their merits, without addressing what the speaker’s angle is in making them (even if you think there’s probably an angle). This delivers most of the benefits of “assume good faith” norms; the main difference I’m proposing is that speakers’ intentions be regarded as off-topic rather than presumed to be honest.
The other strategy is full-contact psychoanalysis: in addition to debating the object-level arguments, interlocutors have free rein to question each other’s motives. This is difficult to pull off, which is why most people most of the time should stick to the object level. Done well, it looks like a negotiation: in the course of discussion, pseudo-disagreements (where I argue for a belief because it’s in my interests for that belief to be on the shared map) are factorized out into real disagreements and bargaining over interests so that Pareto improvements can be located and taken, rather than both parties fighting to distort the shared map in the service of their interests.
For an example of what a pseudo-disagreement looks like, imagine that I own a factory that I’m considering expanding onto the neighboring wetlands, and you run a local environmental protection group. The regulatory commission with the power to block the factory expansion has a mandate to protect local avian life, but not to preserve wetland area. The factory emits small amounts of Examplene gas. You argue before the regulatory commission that the expansion should be blocked because the latest Science shows that Examplene makes birds sad. I counterargue that the latest–latest Science shows that Examplene actually makes birds happy; the previous studies misheard their laughter as tears and should be retracted.
Realistically, it seems unlikely that our apparent disagreement is “really” about the effects of Examplene on avian mood regulation. More likely, what’s actually going on is a conflict rather than a disagreement: I want to expand my factory onto the wetlands, and you want me to not do that. The question of how Examplene pollution affects birds only came into it in order to persuade the regulatory commission.
It’s inefficient that our conflict is being disguised as a disagreement. We can’t both get what we want, but however the factory expansion question ultimately gets resolved, it would be better to reach that outcome without distorting Society’s shared map of the bioactive properties of Examplene. (Maybe it doesn’t affect the birds at all!) Whatever the true answer is, Society has a better shot at figuring it out if someone is allowed to point out your bias and mine (because facts about which evidence gets promoted to one’s attention are relevant to how one should update on that evidence).
The reason I don’t think it’s useful to talk about “bad faith” is that the ontology of good vs. bad faith isn’t a great fit to either discourse strategy.
If I’m sticking to the object level, it’s irrelevant: I reply to what’s in the text; my suspicions about the process generating the text are out of scope.
If I’m doing full-contact psychoanalysis, the problem with “I don’t think you’re here in good faith” is that it’s insufficiently specific. Rather than accusing someone of generic “bad faith”, the way to move the discussion forward is by positing that one’s interlocutor has some specific motive that hasn’t yet been made explicit—and the way to defend oneself against such an accusation is by making the case that one’s real agenda isn’t the one being proposed, rather than protesting one’s “good faith” and implausibly claiming not to have an agenda.
The two strategies can be mixed. A simple meta-strategy that performs well without imposing too high of a skill requirement is to default to the object level, and only pull out psychoanalysis as a last resort against stonewalling.
Suppose you point out that my latest reply seems to contradict something I said earlier, and I say, “Look over there, a distraction!”
If you want to continue sticking to the object level, you could say, “I don’t understand how the distraction is relevant to resolving the inconsistency in your statements that I raised.” On the other hand, if you want to drop down into psychoanalysis, you could say, “I think you’re only pointing out the distraction because you don’t want to be pinned down.” Then I would be forced to either address your complaint, or explain why I had some other reason to point out the distraction.
Crucially, however, the choice of whether to investigate motives doesn’t depend on an assumption that only “bad guys” have motives—as if there were bad faith actors who have an angle, and good faith actors who are ideal philosophers of perfect emptiness. There’s always an angle; the question is which one.
I agree that self-deception is common. But there are at least three reasons why assuming good faith is still a useful strategy:
You shouldn’t just think of the assumption of good faith as a reaction to other people’s self-deception; you should also think of it as a way of mitigating your own self-deception. If you assume bad faith, then it’s very easy to talk yourself into doing all sorts of uncharitable or uncooperative rhetorical moves, like lying to them, or yelling at them, or telling them that they only have their position because they’re a bad person. You can tell yourself that you need to work around the self-deceptive parts of them, by pushing the right buttons to make the interaction productive. Yet a lot of these rhetorical moves will in fact be driven by your own hidden motivations, like your desire to avoid backing down, or your desire to punish the outgroup. So assuming good faith gives those motivations less cover.
Discussions are an iterated game, and it’s easy for one person to accidentally do something which is interpreted by the other as a sign of bad faith, which causes the second person to respond in kind, and so on. Assuming good faith is like adding (limited) amounts of forgiveness to this tit-for-tat interaction. (A toy simulation of this appears below, after the third point.)
While everyone has hidden motives, it’s hard to know which hidden motives are at play in any given discussion. So when Zack says “This is difficult to pull off, which is why most people most of the time should stick to the object level”, this can be seen as another way of saying “actually, as a strong heuristic, assume good faith”.
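To illustrate the second point (a minimal sketch; the noise rate and forgiveness probability below are made-up parameters, chosen only for illustration): strict tit-for-tat lets a single accidental “defection” echo back and forth indefinitely, while a small chance of forgiveness lets cooperation recover.

```python
import random

random.seed(0)

def tit_for_tat(opp_last, forgiveness=0.0):
    """Cooperate at first and copy the opponent's last move,
    except: forgive a defection with probability `forgiveness`."""
    if opp_last is None or opp_last == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def cooperation_rate(forgiveness, rounds=10_000, noise=0.05):
    """Two identical players; `noise` is the chance a move comes out
    as a defection regardless of intent (an accidental bad-faith signal)."""
    a_last = b_last = None
    coop = 0
    for _ in range(rounds):
        a = tit_for_tat(b_last, forgiveness)
        b = tit_for_tat(a_last, forgiveness)
        # Accidental misfires: an intended cooperation comes out as defection.
        if random.random() < noise:
            a = "D"
        if random.random() < noise:
            b = "D"
        coop += (a == "C") + (b == "C")
        a_last, b_last = a, b
    return coop / (2 * rounds)

print("strict tit-for-tat:   ", cooperation_rate(forgiveness=0.0))  # feuds persist
print("forgiving tit-for-tat:", cooperation_rate(forgiveness=0.2))  # feuds die out
```

The design point: a little forgiveness trades away some exploitability in exchange for robustness to misread signals.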
Having said all this: some people should assume more good faith than they currently do; others less; and it’s hard to know where the line is.
I fully agree with the initial premise that bias is common. I don’t see how this supports your conclusions; especially:
(1) You say that the difference between bias and deception is uninteresting, because the main case where you might care is that bias is more likely to fold against strong counter-argument. But isn’t this case exactly what people are using it for?
If I’m having a disagreement with you, and I think I could make a strong argument (at some cost in time/effort), then the question of whether you will fold to a strong argument seems like the central question in the decision to either make that argument or simply walk away.
But I thought the central point of this post was to argue that we should stop using “bad faith” as a reason for walking away?
(2) In your final example (where I point out that you’ve contradicted yourself and you say “look, a distraction!”), I don’t see how either of your proposed responses would prevent you from continuing with “look, another distraction!”
You suggest we could stick to the object level and then the process emitting the outputs would be irrelevant. But whether we’re caught in an infinite loop seems pretty important to me, and that depends crucially on whether the distraction was strategic (in which case you’ll repeat it as often as you find it helpful) or inadvertent (in which case I can probably get you back on topic).
If you are committed to giving serious consideration to everything your interlocutor says, then a bad actor can tie you up indefinitely just by continuing to say new things. If you don’t want to be tied up indefinitely, your strategy needs to include some way of ending the conversation even when the other guy doesn’t cooperate.
(3) In your example of a pseudo-disagreement (about expanding a factory into wetlands), you say it’s inefficient that the conflict is disguised as a disagreement. But your example seems perfectly tailored to show that the participants should use that disguise anyway, because the parties aren’t engaged in a negotiation (where the goal is to reach a compromise); they are engaged in a contest before a judge (the regulatory commission) who has predetermined to decide the issue based on how it affects the avian life. If either side admits that the other side is correct about the question of fact then the judge will decide against them.
Complaining that this is inefficient seems a bit like complaining that it is inefficient for the destruction of factories to reduce a country’s capacity for war, and war would be more efficient if there were no incentives to destroy factories. The participants in a war cannot just decide that factories shouldn’t affect war capacity; that was decided by the laws of physics.
My sense was Zack mostly wasn’t talking about walking away, but instead talking about how people should relate to conversational moves when trying to form beliefs.
I do think “when to walk away” is a pretty important question not addressed here.
I think maybe… “this seems like bad faith” might be too specific of a reason? If you can’t list a more specific reason, like “you seem biased in way-X which is causing problem-Z”, then I think it might be worse to latch onto “bad faith” as the problem rather than “idk, this just feels off/wrong to me but I can’t explain why exactly.”
A related issue not addressed here is “What to do when someone is persistently generating ‘distractions’ that many people feel initially persuaded by, and which take time to unravel?”. Alice says [distracting thing], Bob says “that doesn’t seem relevant / I think you’re trying to distract us because X”, and then Charlie says “well, I dunno I think distracting thing is relevant”, and Bob patiently explains “but it’s not actually relevant because Y”, and Charlie says “oh, yeah I guess” and then Alice says [another distracting thing], and Charlie (or Dave) says “yeah that does seem relevant too” and Bob says “aaaaugh do you guys not see the pattern of Alice saying subtly wrong things that seem persistently avoiding the issue?”
Zack says in his intro that “[people think] that if you’ve determined someone is in bad faith, you shouldn’t even be talking to them, that you need to exile them” and then makes the counter-claim that “being touchy about bad faith accusations seems counterproductive...it shouldn’t be beyond the pale to think that of some particular person, nor should it necessarily entail cutting the ‘bad faith actor’ out of public life.”
That sounds to me like a claim that you shouldn’t use bad faith as a reason to disengage. Admittedly terms like “exile” have implications of punishment, while “walk away” has implications of cutting your losses, but in both cases the actual action being taken is “stop talking to them”, right?
Also note that Zack starts with the premise that “bad faith” refers to both deception and bias, and then addresses a “deception only” interpretation later on as a possible counter-claim. I normally use “bad faith” to mean deception (not mere bias), my impression is that’s how most people use it most of the time, and that’s the version I’m defending.
(Though strong bias might also be a reason to walk away in some cases. I am not claiming that deception is the only reason to ever walk away.)
I’ll grant that “just walk away from deceivers” is a bit simplistic. I think a full treatment of this issue would need to consider several different goals you might have in the conversation (e.g. convincing the other side, convincing an audience, gathering evidence for yourself) and how the deception would interact with each of them, which seems like it would require a post-length analysis. But I don’t think “treat it the same as bias” is strategically correct in most cases.
I agree, and I think all these strategies have ways of ending the conversation:
Stick to the object level → “We are going in circles, goodbye”. This is “meta” in that it is a conversation about the conversation, but it matches Zack’s description of the strategy: it does not address the speaker’s angle in raising distractions, and sticks to the object-level point that the distractions have no merit as arguments.
Full-contact psychoanalysis → “I see that you don’t want to be pinned down, and probably resolving this contradiction today would be too damaging to your self image. I have now sufficiently demonstrated my intellectual dominance over you to those around us, and I am leaving to find a more emotionally fulfilling conversation with someone more conventionally attractive”. Maybe someone who thinks this is a good strategy can give better words here. But yes, you sure can exit conversations while speculating about the inner motivations of the person you are speaking to.
Assume good faith → “You seem very distractible today, let’s continue this tomorrow. Have a great evening!”. This isn’t much of a stretch. Sometimes people are tired, or stressed, or are running low on their stimulant of choice, and then they’re hard to keep focused on a topic, and it’s best to give up and try again later. Possibly opening with a different conversational strategy.
My concern isn’t “what words do you say when you leave”, it’s “how do you decide when to leave”.
If I tell you that the local bar is giving out free beer tonight, because I just made that up, have I committed deliberate deception? I don’t know that that statement is false. I just have no knowledge at all about the state of the bar tonight, coupled with some priors which suggest that free beer is unlikely. But if by “deception” you mean “I know X is false and I said X anyway”, I haven’t tried to deceive you at all.
So it doesn’t make sense to limit the concept of bad faith to deliberate deception.
I would consider that deliberate deception, yes. I interpret “deception” to mean something like “actions that are expected or intended to make someone’s beliefs less accurate”.
The technical name, for a statement made with no concern for its truth or falsehood, is bullshit.
You would be deceiving someone regarding the strength of your belief. You know your belief is far weaker than can be supported by your statement, and in our general understanding of language a simple statement like ‘X is happening tonight’ is interpreted as having a strong degree of belief.
If you actually, truly disagree with that, then it wouldn’t be deception; it would be miscommunication. But then again, I don’t think someone who has trouble assessing approximate Bayesian belief from simple statements would be able to function in society at all.
I think this sentence is assuming a one-way conversation?
Yes, if you give a talk, and I watch it later on YouTube, then I agree that I shouldn’t care too much whether you are sincerely motivated to speak the truth but you have been led astray by self-serving rationalizations, versus you explicitly don’t care about the truth and are just mouthing whatever words will make you look good.
But if we’re in a room together, having a back-and-forth conversation, then those are two very different situations. For example, deception-versus-bias is relevant to my prospects for changing your mind via object-level discussion.
Hmm, actually, even in the one-way-conversation case, deception-vs-bias has nonzero relevance. For example, if I know that you are biased but not deceptive, and you make a very unambiguous claim that I know you know, but that I can’t check for myself (e.g. if you say “I have eaten calamari before”), then I should put more credence on that claim, if I know you’re biased but not deceptive, compared to the other way around.
Here’s another perspective: I find the terms “good faith” / “bad faith” to be incredibly useful in everyday life, and I’m not sure how you would explain that fact. Do you think I’m insufficiently cynical or something?
Can you go into more examples/details of how/why?
I’m not Steven, but I know a handful of people who have no care for the truth and will say whatever they think will make them look good in the short term or give them immediate pleasure. They lie a lot. Some of them are sufficiently sophisticated to try to only tell plausible lies. For them debates are games wherein the goal is to appear victorious, preferably while defending the stance that is high status. When interacting with them, I know ahead of time to disbelieve nearly everything they say. I also know that I should only engage with them in debates/discussions for the purpose of convincing third party listeners.
It is useful to have a term for someone with a casual disregard for the truth. Liar is one such word, but it also carries the connotation that the specific thing they are saying in the moment is factually incorrect—which isn’t always true with an accusation of bad faith. They’re speaking without regard to the truth, and sometimes the truth aligns with their pleasure, and so they say the truth. They’re not averse to the truth, they just don’t care. They are arguing in bad faith.
They’re bullshitters.
“Both in lying and in telling the truth people are guided by their beliefs concerning the way things are. These guide them as they endeavour either to describe the world correctly or to describe it deceitfully. For this reason, telling lies does not tend to unfit a person for telling the truth in the same way that bullshitting tends to. …The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”
—Harry G. Frankfurt, On Bullshit
Note that bullshitting is only one subtype of bad faith argument. There are other strategies of bad faith argument that don’t require making untrue statements, such as cherry picking, Gish galloping, making intentional logical errors, or being intentionally confusing or distracting.
Oh yes, of course. I was only talking about the people Stephen mentioned, “who have no care for the truth and will say whatever they think will make them look good in the short term or give them immediate pleasure”.
I think this and some previous posts have been part of a multi-year attempt to articulate a worldview that our culture doesn’t already have standard language for; cynical is close, but not quite right.
Do you ever get accused of bad faith? How do you respond? I always want to point at the Wikipedia definition and say, “Can you be more specific? What motivation do you think I have here I haven’t been explicit about? I’m happy to clarify!”
Crucially, it’s not the case that there’s never going to be anything to clarify. Sometimes when I’m in a discussion on Twitter with someone who’s more of an ideologue than I am, I end up choosing my words carefully to make it sound like I’m “on their side” to a greater extent than I really am, because I’m afraid that they’d slam the door on me if they knew what I was really thinking, but they might listen to the intellectually substantive point I’m trying to make if I talk like a fellow traveler. (This is also point #7 of Scott Alexander’s nonfiction writing advice.)
Is that bad faith, or just effective rhetoric? I think it’s both! And I think human life is full of these little details where the way people naturally talk and behave needs to be explained in terms of social and political maneuvering, and you can’t just “not do that” because it’s not clear what exactly that would entail. (Refusing to signal is like refusing to allow time to pass.) The closest I can get to “not doing that” is by going meta on it—writing about it in posts like this, lampshading it when I can afford to.
Yes it’s possible to be more specific than “good faith” / “bad faith” but that doesn’t mean those phrases aren’t communicating something substantive and useful, right? By the same token, every possible word and phrase and sentence could be elaborated into something more specific.
I also agree that there are edge cases, but again, that’s a near-universal property of using language to communicate.
Here’s my defense of good faith / bad faith.
STEP 1: Conscious / endorsed / ego-syntonic motives are not the only kind of motive, let alone the only influence on behavior, but they are a motive and an influence on behavior, and a particularly important one for many purposes.
For example, if I have a conscious / endorsed / ego-syntonic motive to murder you but find myself with cold feet, that’s a different situation than if I have a conscious / endorsed / ego-syntonic motive to not murder you but have anger and poor self-control. You very much care which one it is, when deciding what to say to me, guessing what’s gonna happen in the future, etc., even though both situations could be described as “you have some sources of desire to murder me and other sources of desire to not murder me”. That’s why we have terms like “self-control” and “self-awareness” that are relevant to how strongly one’s conscious / endorsed / ego-syntonic motives determine behavior. It’s a common thing to be thinking about.
STEP 2: Conscious / endorsed / ego-syntonic motives can vary continuously along many dimensions, but “good-faith” / “bad-faith” tends to label two opposite ends of one important such dimension of variation, in a generally pretty clear way (in context). As in the fallacy of gray, the existence of a spectrum does not reduce the usefulness of labeling its opposite ends.
I can’t immediately recall any specific examples from my own life.
I would probably explicitly spell out what I interpret the accusation of bad faith to mean, and then say that this accusation is not true (if it isn’t). For example, if I wrote a critical book review, and someone said I was criticizing the book in bad faith, maybe I would say “I understand your comment to be something like: You think I set out with an explicit goal to do a hit job on this book, because I’m opposed to its conclusions, and I am just saying whatever will make the book look bad without regard to being fair and honest. If that is what you mean, then…”
…and then maybe I would say “that’s totally wrong. I came in really hoping and expecting to like this book, and was surprised and disappointed about how bad it was.” or maybe I would say “I admit that I felt really defensive when reading the book, but I did really earnestly try to give it a fair shake and to be scrupulously honest in my review, and I feel bad if I fell short of that” or whatever.
So yeah, it’s not like there’s no gray area or scope to elaborate. But the original “bad faith” accusation is a perfectly good starting point from which to elaborate if necessary. By the same token, if I say “you’re confused about X”, then yeah that could sure benefit from elaboration, but that doesn’t mean I was doing something wrong by saying that in the first place, or that we should drop the word “confused” from our vocabulary. Conversations always rely on lazy evaluation—you clarify things when it turns out that they’re both unclear to the other person and important to the conversation. You can’t just preemptively spell out every detail. It’s not practical.
I just did a quick search of my public writings for “good faith” / “bad faith”. The first three I found were this comment, and a footnote in that same post, and a comment here. All three used the term “good faith” rather than “bad faith”. When I think about it, I guess I do use “good faith” more often than “bad faith”. I usually use “good faith” as meaning something similar to “earnestly” and “actually trying” and “acting according to explicit motivations that even my opponents would endorse”.
Ah, here’s an example of me saying “bad faith”. You might find this interesting: I actually wrote ““Hype” typically means Person X is promoting a product, that they benefit from the success of that product, and that they are probably exaggerating the impressiveness of that product in bad faith (or at least, with a self-serving bias).” Note that I call out “bad faith” and “with a self-serving bias” as two different things, one implicitly worse than the other. The salesperson who knowingly lies and misleads from an explicit goal to advance his own career is one thing, the salesperson who sincerely (albeit incorrectly) believes his product will help the customer and is explicitly acting from that motivation is a different thing, and it is useful to distinguish one from the other, even if there’s a spectrum between them with gray area in between.
I do sometimes run into good faith disagreements, they feel very different from typical disagreements.
Example 1: someone says something on the phone about “Simon building a computer science model of his father’s situation”. I think it’s referring to Simon A and a friend thinks it’s Simon B. We bet and call the person back and I was wrong and feel silly.
Example 2: Someone says that standard financial prudence calls for minimizing market beta. I think this is wrong and it calls for increasing it in most cases. Then I look it up and realize it’s because you can lever up a low market beta to a high one.
It feels really different from a normal non-trivial disagreement. In both cases it felt at the time like I was very probably correct and it was confusing that the other person disagreed. There is usually some mix of bad faith and good faith in typical cases, and that makes the resolution a lot slower.
If I meet a friend at a café to talk, then there’s nothing hidden about a key motivation being the social interaction with the friend. If the friend were to say, “You are talking in bad faith, because I thought you just cared about learning information from me, but you are actually here to enjoy the social interaction with me,” that would feel very weird to me.
Partial notes so far:
So, things I explicitly agree with:
“You’re acting in bad faith” is pretty non-specific and much less useful.
The spectrum between “object level” and “full-contact psychoanalysis” is a useful frame for considering how to deal with problems of bias/deception/motivated-cognition/etc. (I don’t know that it’s the only frame, but seems like a nice default if you haven’t thought of a better one).
I think the “generally stick to object level, bring out the psychoanalysis when it seems particularly important” is a pretty good BATNA.
Thing I agree with although I’m not sure it has quite the same implication you argue:
“Most people are doing some flavor/degree of bias/duplicitousness/motivation/deception most of the time.”
It seems true, although it feels Fallacy of Gray-ish? The question is how often this comes up to a degree that really gets in the way. It seems right that one should at least consider “hmm, this is a spectrum that is almost never at the ‘zero’ degree, so maybe I should hypothesize that it’s ‘significant’ more often.” But it feels like this post is implying a higher degree of “assume things are at a ‘significant’ end of the spectrum.”
I’m not sure how much of a direct response this post is to Assuming Positive Intent, but I want to revisit So8res’ stated definition there:
This feels somewhat different from what you’re talking about here. Whether you intended this post as part-of-that-conversation or not, I think the tendency to conflate the thing-you’re-talking-about-AFAICT and This Thing feels important to notice.
There is a big difference between the apple colours and concert dates and most typical disagreements: namely that for apples and concerts, there is ridiculously strong, unambiguous evidence that one side is correct.
Looking at QM interpretations, for example, if a pilot wave theory advocate sits down with a Many-worlds person, I wouldn’t expect them to reach an agreement in an afternoon, because the problem is too bloody complicated, and there is too much evidence to sift through. In addition, each person will have different reactions to each piece of evidence, as they start off with different beliefs and intuitions about the world. I don’t think it’s “bad faith” that people are not identical Bayesian clones of each other.
Of course, I do agree that oftentimes the reasons for interpreting evidence differently are influenced by bias, values, and self-interest. It’s fair to assume that, say, a flat-earther will not be won over by rational debate. But most disagreements are in the murky middle, where the more correct side is unlikely to win outright, but can shift the other person a little bit over to the correct side.
I think this is the clearest articulation of this problem I’ve seen. I think it helps me understand what was going on with Bad intent is a disposition, not a feeling and Assuming Positive Intent. (I think “Assuming Positive Intent” wasn’t quite talking about the same thing this post is talking about, but was heavily overlapping. I feel better able to think about the differences now)
I think I basically agree with everything written here as-stated, but will think for a bit and try to check/flesh-out my updated model.
I generally like the move of “talk about cost-benefit analyses” rather than arguing ‘shoulds’.
I think that this post relies too heavily on a false binary. Specifically, the description of all arguments as “good faith” or “bad faith” completely ignores the (to my intuition, far likelier) possibility that most arguments begin primarily in good faith (my guess is 90% or so, but maybe I just tend not to hold arguments with people below 70%), and then people adjust according to their perception of their interlocutor(s), audience (if applicable), and the importance of the issue being argued. Common signals of arguments in particularly bad faith advanced by otherwise intelligent people include persistent reliance on most listed logical fallacies (dismissing that criticism and keeping a given point after its fallacious nature is clearly explained; sealioning and whataboutism are prototypical exemplars), moving the goalposts, and ignoring all contradictory evidence.
Another false binary: this also ignores the possibility of both sides in an argument being correct (or founded on correct factual data). For example, today I spent perhaps 10 minutes arguing over whether a cheese was labeled as Gouda or not, because I’d read the manufacturer’s label which did not contain that word but did say “Goat cheese of Holland” and my interlocutor read the price label from Costco which called it “goat Gouda.” I’m marginally more correct because I recognized the contradiction in terms (in the EU gouda can only be made from milk produced by Dutch cows), but neither of us was lying or arguing in bad faith, and yet I briefly struggled to believe that we were inhabiting the same reality and remembering it correctly. They were a very entertaining 10 minutes, but I wouldn’t want to have that kind of groundless argument more than once or twice a week, and that limit assumes a discussion of trivial topics as opposed to an ostensibly sincere debate on something which I hold dear.
“You can’t tell what someone is doing by watching what they’re doing.”
Bad faith and merely occluded thought processes are distinguished not by passive observation of what comes out of one’s interlocutor’s mouth, but by what happens when you push in various ways.
I think that I largely agree with this post. I think that it’s also a fairly non-trivial problem.
The strategy that makes the most sense to me now is that one should argue with people as if they meant what they said, even if you don’t currently believe that they do.
But not always—especially if you want to engage with them on the point of whether they are indeed acting in bad faith; there comes a time when that becomes necessary.
I think pushing back against the norm that it’s wrong to ever assume bad faith is a good idea. I don’t think that people who do argue in bad faith do so completely independently, for two reasons. The first is simply that I’ve noticed it clusters into a few contexts; the second is that acting deceptively is inherently riskier than being honest, so it makes more sense to tread well-trodden paths. More people aiding the same deception gives it the necessary weight.
It seems to cluster among things like morality (judgements about people’s behaviors), dating preferences (which are kind of similar), and reputation. There is kind of a paradox I’ve noticed in the way that people who tend to be kind of preachy about what constitutes good or bad behavior will also be the ones who argue that everyone is always acting in good faith (and thus chastise or scold people who want to assume bad faith sometimes).
People do behave altruistically, and they also have reasons to behave non-altruistically too, at times (whether or not it is actually a good idea for them personally). The whole range of possible intentions is native to the human psyche.
I’m not fully clear on the concrete difference between “assume good faith” and “stick to the object level”, as instrumental strategies. I’ll use one of Zack’s examples, written as a dialog. Alice is sticking to the object level. I’m imagining that she is a Vulcan and her opinions of Zack’s intentions are inscrutable except for the occasional raised eyebrow.
Alice: “Your latest reply seems to contradict something you said earlier.”
Zack: “Look over there, a distraction!”
Alice: “I don’t understand how the distraction is relevant to resolving the inconsistency in your statements that I raised.”
Here is my attempt at the same conversation with Bob, who aggressively assumes good faith.
Bob: “Is there a contradiction between your latest reply and this thing you said earlier?”
Zack: “Look over there, a distraction!”
Bob: “I’d love to talk about that later, but right now I’m still confused about what you were saying earlier, can you help me?”
Is that the type of thing? Bob is talking as if Zack has a heart of gold and the purest of intentions, whereas Alice is talking as if Zack is a non-sentient text generator. In both cases admitting that you’re doing that isn’t part of the game. Both of them are modeling Zack’s intentions, at least subconsciously. Both are strategically choosing not to leak their model of Zack to Zack at this stage of the conversation. Both are capable of switching to a different strategy as needed. What are the reasons to prefer Alice’s approach to Bob’s?
To be clear, I completely agree that assuming good faith is a disaster as an epistemic strategy. As well as the reasons mentioned above, brains are evolutionarily adapted to detect hidden motives and generate emotions accordingly. Trying to fight that is unwise.
A clarification:
Consider the premises (with scare quotes indicating technical jargon):
1. “Acting in Bad Faith” is Bayesian evidence that a person is “Evil”.
2. “Evil” people should be shunned.
The original poster here is questioning statement 1, presenting evidence that “good” people act in bad faith too often for it to be evidence of “evil.”
However, I believe the original poster is using a broader definition of “Acting in Bad Faith” than the people who support premise 1.
That definition, concisely, would be “engaging in behavior that is recognized in context as moving towards a particular goal, without having that goal.” Contrast this with the OP quote: bad faith is when someone’s apparent reasons for doing something aren’t the same as the real reasons. The person’s apparent reasons don’t matter; what matters is the socially determined values associated with specific behaviors, as in the Wikipedia examples. While some behavior (e.g., a conversation) can have multiple goals, some special things (waving a white flag, arguing in court, and now in the 21st century that includes arguing outside of court) have specific expected goals (respectively: allowing a person to withdraw from a fight without dying, to present factual evidence that favors one side of a conflict, and to persuade others of your viewpoint). When an actor fails to hold those generally understood goals, that disrupts the social contract, and “we” call it “Acting in Bad Faith”.
There is something slippery about the ways you write your posts; I’ve read several of them and that’s the impression I’m constantly being left with. You make a couple of generally true statements, then apparently jump to a conclusion which doesn’t actually follow from them but feels somewhat relevant.
Here, as a demonstration, I try to preserve the general structure of your argument but replace “bad faith” with “not having human values”:
I think it’s clear that the last sentence doesn’t actually follow from the previous two paragraphs. The fact that most of reality doesn’t have human values doesn’t mean that we shouldn’t create norms promoting them and clever plans that would actually make reality more aligned with human values. There is a self-fulfilling component here. Our agency and coordination can make human values more widespread only if we actually try to do it.
Likewise, the fact that there are reasons why people generally do have hidden motives, and most conversations are not honest attempts to find truth between two unbiased, disagreeing parties, doesn’t mean that we shouldn’t try to uphold good-faith-promoting norms: demand good faith and do our best to engage with as much good faith as we can muster, while talking to other people who do the same. After all, if no one does it then there won’t be any good faith engagement at all.
It’s an obvious case of iterated prisoner’s dilemma. Cooperation means doing your best to engage with the point of discussion in an honestly truth-seeking way. Defection—propagating your own viewpoint by any means necessary. Clearly, cooperating with a defector isn’t helpful. So it’s useful to have norms that disincentivise defection, exiling defectors from the conversation.
In practice, there is a whole spectrum between engaging in perfectly good faith and pushing your own agenda by every dirty trick in the book. Some people just do not know how to do better, some people are too much under the control of their biases, some people can just be having fun by trolling. There can be different reasons why people are not engaging in the best possible faith, and maliciousness of intent is irrelevant. Whatever your reasons, if you can’t uphold the required level, there is nothing to talk about. Please do better and come again.
And of course it makes sense to have some compassion here and forgive occasional slips when it seems that the person is honestly trying to engage in good faith, just as in general with the iterated prisoner’s dilemma with a small error probability.
It seems like you’re conflating acting in good faith with assuming that other people are acting in good faith.
You’re saying that we should act in good faith. Zack is saying we shouldn’t assume that other people are acting in good faith.
Is there actually a disagreement?
Part of acting in good faith is indeed assuming that your partner is also acting in good faith. You literally have faith in them in this regard. Unless, of course, they gave you substantial reasons to think that they are not upholding their part of the bargain. Good faith is supposed to be the initial assumption. And then it can be updated by the evidence.
I think the logic of the prisoner’s dilemma clearly shows why “assuming good faith” and “acting in good faith” can’t be properly separated from each other. If you do not assume that the other person will cooperate, you do not have any reason to cooperate in return. If you are not actually thinking that cooperation is possible and yet you still try to cooperate, you are just behaving irrationally.
Not if you define the terms the way the OP defined them. If you see acting in good faith as being focused on learning what’s true, falsely assuming that your partner is acting in good faith is a hindrance.
That’s not even slightly what the terms “good faith” / “bad faith” mean. Zack explains very clearly what’s being referred to, and you’re ignoring that in favor of your own idiosyncratic definition. That’s not a disagreement—it’s a mistake on your part.
Dictionary editors are not the Legislators of Language. Zack notices that common usage doesn’t exactly fit the dictionary. Then he notices that the dictionary meaning probably doesn’t carve reality at its joints.
If there is a mistake here, it’s on the part of the dictionaries, for not capturing the way humans use the words and the way reality is jointed. Then he goes on to argue that if we accept the dictionary definition at face value, being touchy about bad faith accusations doesn’t make any sense, we should assume bad faith, and acting in bad faith is normal. Either that, or we should abandon the terms altogether as meaningless.
I explain the way the words are actually being used, with the connection between acting in good faith, expecting good faith, and demanding good faith grounded in the logic of the prisoner’s dilemma. This common usage doesn’t have all the disadvantages that Zack mentioned. It seems to carve reality properly. So we should just use the better definition instead of abandoning the terms.
I think this elucidates the “everyone has motives” issue nicely. Regarding the responses, I feel uneasy about the second one. Sticking to the object level makes sense to me. I’m confused how psychoanalysis is supposed to work without devolving.
For example, let’s say someone thinks my motivation for writing this comment is [negative-valence trait or behavior]. How exactly am I supposed to verify my intentions?
In the simple case, I know what my intentions are and they either trust me when I tell them or they don’t.
It’s the cases when people can’t explain themselves that are tricky. Not everyone has the introspective skill, or verbal fluency, to explain their reasoning. I’m not really sure what to do in those cases other than asking the person I’m psychoanalyzing if that’s what’s happening.
I fully disagree that “bad faith” is a useless distinction, though you’re right that it’s not the only reason for disagreement or miscommunication.
Your examples show that “bad faith” comprises BOTH malign intent AND deception in the intent of a communication or negotiation. It’s saying one thing (“I surrender” or “I’ll negotiate these terms”) while maliciously meaning another (“I’m going to keep fighting” or “I am just wasting your time”). It’s more specific than EITHER “deception” or “antagonism”, because it’s a mix of the two.
Honest conflicts can be resolved (or at least explored and then fought), and non-malign confusion or miscommunication can be identified and recovered from or isolated to cooperation-without-agreement or an actual conflict. But “bad faith” combines these in a way that makes identification difficult, presumably because at least one party isn’t interested in identifying and resolving the crux.
Good post, I largely agree with your point. This part in particular is relevant:
I get accused of bad faith regularly (whether the accusation is earnest and made in “good faith” is another question) and I agree completely that a naked denial doesn’t accomplish anything. Like you, I usually can see what the accusation is based on, and so what I do is acknowledge that the suspicion is reasonable (it often is!) and then explain why it’s wrong. If I can’t see what it’s based on, then I ask something along the lines of “what could convince you otherwise?” Sometimes there’s nothing that could dislodge the truck stuck in the mud, and it’s good to know that.
I find that this is a useful approach in everyday personal disagreements too because a lot of them are spawned out of suspicions. If someone shirks on a household chore, maybe it’s because they genuinely forgot OR MAYBE it’s because they are driven by animus and hatred towards their roommates. If the shirking continues as part of a regular pattern, it’s perfectly reasonable for the roommates to become drawn to the latter hypothesis.
To your broader point, accusing someone of bad faith doesn’t really accomplish much, and your proposed solutions (just stick to the object level or, in the alternative and if the circumstances warrant it, full-contact psychoanalysis) seem perfectly appropriate.
I liked the length, readability, and importance; happy to spend my reading budget on this.
Here are some thoughts I had:
You said, “the belief that persistent good faith disagreements are common would seem to be in bad faith!” and this tripped my alarm for gratuitous meta-leveling. Is that point essential to your thesis? Unless I read too quickly, it seems like you gave a bunch of reasons why that belief is wrong, then pointed out that it would seem to be in bad faith, but then didn’t really flesh out what the agenda/angle was. Was that intentional? Am I just stumbling over a joke I don’t understand?
I would be interested to read a whole post about how full-contact psychoanalysis can go well or poorly. I’ve seen it go well, but usually in ways that are noticeably bounded, so I think I’ll challenge the word “full” here. You meant this as an idealization/limiting case, right?
I feel like there is an implicit call to action here, which may not be right for everyone. I anticipate early adopters of Assuming Bad Faith to pay noticeable extra costs, and possibly also late adopters. I don’t have anything in particular in mind, just Chesterton’s Fence and hidden order type heuristics, plus some experience seeing intentional norm-setting go awry.
Agreed. Much better than Zack’s two recent posts of >20k words. Though I would give extra points if the essay was divided into subheadings.
Hm. Interesting topic.
It sounds like in this post, you’re mostly taking a sort of God’s Eye perspective on the problem. Like, if you could look down and set group norms with respect to bad faith, what norms would you set?
This is a different question from the question of what to do in everyday life. I think in everyday life, accusing people of acting in bad faith is almost always going to be a bad idea. For starters, I think most people believe “bad faith” means “with ill intent” (I did before reading this post). So you’d have to start by clarifying that you are talking about something different. But still, after doing so, I think the result is likely to be a) the conversation devolves and b) you stir up animosity with the other person. Something more How To Win Friends And Influence People style seems wiser.
On the other hand, in talking to people whom you’re very close to, I could see it being fruitful sometimes. But even then, I suspect it’d be best to not bring it up in the midst of the current conversation, and instead bring it up, e.g., the next day as a separate conversation. Like, “Hey, remember that conversation we had yesterday? I was a little unhappy with something. Mind if we talk about it?”
As for the God’s Eye perspective, one thing is that I think it depends on the group. For example, I don’t think “psychoanalyze as you wish” would work very well in a group of middle school aged girls. For a group of experienced rationalists, I’d be more optimistic.
For groups like rationalists where it’s more plausible to work, I’m having trouble thinking about guidelines for when it is and when it isn’t ok to psychoanalyze. Maybe “use your judgement” would be fine?
I do really like the idea, though, of it being a norm, e.g., in rationalist circles, to not get touchy about bad faith accusations. But in other circles I could see touchiness being a useful norm, although I’m not sure. Chesterton’s Fence and Memetic Immune Systems seem like things to keep in mind there.
There are also different priors. While in general you might very well be right (or at least this post makes a lot of sense to me), I often have conversations where I’m pretty sure both my interlocutor and I are discussing things in good faith, but where we still can’t agree on pretty basic things (usually about religion).
What’s missing in this discussion is why one is talking to the “bad faith” actor in the first place.
If you’re trying to get some information and the “bad faith” actor is trying to deceive you, you walk away. That is, unless you’re sure that you’re much smarter or have some other information advantage that allows you to get new useful information regardless. The latter case is extremely rare.
If you’re trying to convince the “bad faith” actor, you either walk away or transform the discussion into a negotiation (it arguably was a negotiation in the first place). The post is relevant for this case. In such situations, people often pretend to be having an object level discussion although all parties know it’s a negotiation. This is interesting.
Even more interesting, Politics: you’re trying to convince an amateur audience that you’re right and someone else is wrong. The other party will almost always act “in bad faith” because otherwise the discussion would be taking place without an audience. You can walk away while accusing the other party of bad faith, but the audience can’t really tell if you were “just about to lose the argument” or if you were arguing “less in bad faith than the other party”, perhaps because the other party is losing the argument. Crucially, given that both parties are compelled to argue in bad faith, the audience is to some extent justified in not being moved by any object level arguments, since they mostly cannot check if they’re valid. They keep to the opinions they have been holding and the opinions of people they trust.
In this case, it might be worth it to move from the above situation, where the object level being discussed isn’t the real object-level issue, as in the bird example, to one where a negotiation is taking place that is transparent to the audience. However, this is only possible if there is a competent fourth party arbitrating, as the competing parties really cannot give up the advantage of “bad faith”. That’s quite rare.
An upside: If the audience is actually interested in the truth, however, and if it can overcome the tribal drive to flock to “their side”, they can maybe force the arguing parties to focus on the real issue and make object-level arguments in such a way that the audience can become competent enough to judge the arguments. Doing this is a huge investment of time and resources. It may be helped by all parties acknowledging the “bad faith” aspect of the situation and enforcing social norms that address it. This is what “debate culture” is supposed to do but as far as I know never really has.
My takeaway: don’t be too proud of your debate culture where everyone is “arguing in good faith”, if it’s just about learning about the world. This is great, of course, but doesn’t really solve the important problems.
Instead, try to come up with a debate culture (debate systems?) that can actually transform a besides-the-point bad-faith apparent disagreement into a negotiation where the parties involved can afford to make their true positions explicitly known. This is very hard but we shouldn’t give up. For example, some of the software used to modernize democracy in Taiwan seems like an interesting direction to explore.
Can you say more about this?
Besides thinking it fascinating and perhaps groundbreaking, I don’t really have original insights to offer. The most interesting democracies on the planet in my opinion are Switzerland and Taiwan. Switzerland shows what a long and sustained cultural development can do. Taiwan shows the potential for reform from within and innovation.
There’s a lot of material to read, in particular the events after the sunflower movement in Taiwan. Keeping links within lesswrong: https://www.lesswrong.com/posts/5jW3hzvX5Q5X4ZXyd/link-digital-democracy-is-within-reach and https://www.lesswrong.com/posts/x6hpkYyzMG6Bf8T3W/swiss-political-system-more-than-you-ever-wanted-to-know-i
Do you have something to share about Taiwan if you don’t try to keep the links within LW? (Oh, I now noticed the first link is actually a link to podcast. But still, if you have something more to share I’d be interested. It’s the second time I saw Taiwan’s digital democratic tools mentioned on LW recently)
Isn’t this self-defeating?
Take for example the comments here, if all of them are assumed to have been made not in ‘good faith’, why would you ever substantially engage with them?
And vice versa, if they all start not assuming ‘good faith’ with future posts by ‘Zack_M_Davis’, doesn’t that imply a negation of any built up credibility?
And if so, why would they care at all about what is written by this account?
Sorry if the title was confusing. (It was too punchy to resist.) I think if you read the full text of the post and pretend it was titled something else, it will make more sense: I’m appealing to the definition of “bad faith” as being about non-overt motives, and explicitly denying that this precludes value in reading or engaging, precisely because non-overt motives are pretty ordinary.
Ah, so the title was in bad faith. Nicely recursive!
Oh that’s fun, Wikipedia caused me to believe for so many years that “bad faith” means something different from what it means and I’m only learning that now.
If someone really believes it, then I don’t think they’re operating in “bad faith”. If the hidden motive is hidden to the speaker, that hiding doesn’t come with intent.
It definitely matters. It completely changes how you should be trying to convince that person or behave around them.
It’s different to believe a dumb argument than to intentionally lie, and honestly, humans are pretty social and honest. We mostly operate in good faith.