Bad Problems Don’t Stop Being Bad Because Somebody’s Wrong About Fault Analysis
Here’s a dynamic I’ve seen at least a dozen times:
Alice: Man, that article has a very inaccurate/misleading/horrifying headline.
Bob: Did you know, *actually* article writers don’t write their own headlines?
…
But what I care about is the misleading headline, not your org chart.
Another example I encountered recently (anonymized): a friend complained about a prosaic safety problem at a major AI company that went unfixed for multiple months. Someone else with background information “usefully” chimed in with a long explanation of organizational restrictions: why the team responsible for fixing the problem had limited resources like senior employees and compute, why not fixing the problem was actually the correct priority for them, etc etc etc.
But what I (and my friend) cared about was the prosaic safety problem not being fixed! And what this says about the company’s ability to proactively respond to and fix future problems. We’re complaining about your company overall. Your internal team management was never a serious concern for us to begin with!
Kelsey Piper wrote about the (horrifying) case where hantavirus carriers in the recent outbreak on a cruise ship were released and sent back to their home countries, often on commercial airplanes. No systematic quarantine seemed to be in place, and only some of the exposed people were even instructed to self-quarantine.
Now, in light of new information, we think it’s very unlikely that this will end up being a pandemic (the virus isn’t contagious enough in human-to-human transmission). But that sure seems like pure luck rather than careful risk-benefit analysis; we only learned about the low contagiousness from negative tests after the cruise-ship passengers were sent home.
Seems pretty incompetent for humanity to manage a potential future pandemic this way!
Tweeters disagreed. They argued that everything’s fine because in fact the WHO as an advisory body can’t enforce legal quarantines on sovereign states.
Huh? Why is that relevant here? If this hantavirus outbreak were in fact as contagious as COVID (while maintaining the ~30% fatality rate common for past infections), Nature’s not going to be like “oops, my bad. I was planning to kill 2 billion of you, but I misunderstood your world’s by-laws about which entities are responsible for enforcing quarantines. I’ll just let y’all have a pass on this otherwise fatal pandemic and take my business elsewhere until you sort it out.”
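(For the arithmetic behind that 2 billion figure, a rough back-of-envelope sketch, with the infection fraction being my own assumption: if COVID-level contagiousness eventually infects most of the world’s ~8 billion people, say ~7 billion, then

$$0.30 \times 7 \times 10^9 \approx 2 \times 10^9$$

deaths, i.e., roughly 2 billion.)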
In each of these examples, people’s reactions were something like explanation-as-exoneration: treating the descriptive fact of why something happened as if it answered the normative question of whether it should have.
This is a cognitive mistake or logical fallacy that is so wrong I’m not even sure how to address it. Like in the examples above, people weren’t even originally blaming the group that someone else rushed to defend! But even granting that they were, how does shifting the blame address the underlying problem?
The reasoning has to be something like “these people are (implicitly) blaming some group G for an alleged problem P. If I can demonstrate that these people are wrong to blame group G, then I’ve demonstrated that they’re Wrong. Since Wrongness is a transitive property, we can be sure that problem P isn’t real (???) and we no longer need to be worried about P.”
Maybe I’m strawmanning, but I really don’t understand the logic here!
In some of those cases, like the second example about prosaic AI safety, there’s clearly a specific party feeling accused and defensive, so self-serving bias is at play. But most of the time I’ve encountered this fallacy in the wild, it’s from seemingly disinterested third parties! So I really don’t know what makes people react this way.
Now of course, sometimes it does make sense to point out that a different person or institution is at fault. For example, if Alice sees a bad headline, wrongly blames Carol, the innocent columnist, and plans to angrily email Carol about it, you can gently point out that it’s not Carol’s fault but her editor Eddie’s. Alice can angrily email Eddie instead; problem solved! [1]
However, these explanations are often delivered in a way that doesn’t suggest a different person to blame, and instead somehow implies that you’re wrong for wanting a solution to begin with.
Another good adjacent reason stems from “ought implies can.” If it turns out a problem somebody complains about is impossible to solve (or practically infeasible, or too expensive, etc), it (sometimes) helps to inform them of this so they can set realistic expectations and/or complain about more tractable problems.
This is true both for physical impossibilities and for answers of the form “if everybody would just.”
But saying that one person or institution that you might think is at fault is not at fault isn’t exactly a proof that solving a problem is impossible! I don’t really see how it’s even evidence, most of the time.
Overall I’m pretty confused by this pattern of thinking. On the other hand we might have discovered a novel fallacy, so that’s fun!
[1] Though in my experience, if you email the writer about a bad headline, they can usually get it resolved anyway.
A sort of opposite conversational pattern I see, which I hate, is something like:
Bob: I am really sorry, I made a big mistake and (something has gone wrong).
Alice: How did this go wrong?
Bob: We had that event that day and we were all distracted talking to the visitors and eating the cake. I wasn’t used to the new UI yet, and it looks like I must have muddled some of the inputs.
Alice: Well, that is no excuse.
Bob: I never said it was. You asked for an explanation. I owned this mistake in my first line of this dialogue.
The issue here is that Bob is trying to provide a real explanation, of the kind that would be useful for making a plan to avoid similar issues in the future. But Alice is instead trying to give him a dressing-down for his mistakes. Possibly both conversations should happen, but not at the same time, and both parties should know which one they are in.
(Although I still think asking for an explanation and then complaining about excuses is a jerk play that is surprisingly common.)
A: “This is a bad problem! We should solve this problem by giving Group X more power to fix it!”
B: “Actually, it sure looks like this problem is plausibly caused by Group X, and certainly they’re exercising all the power they currently have to make it worse. I’m not sure what you hope to accomplish by giving them more power.”
A: “Bad problems don’t stop being bad just because someone is bad at fault analysis!”
B: “No, they don’t, but listening to your solutions to those bad problems can stop being a good idea because you’re bad at fault analysis.”
In the specific examples above, note that the As in question weren’t even primarily interested in fault analysis at the level of granularity that the Bs wanted to drag the problem into. The Bs also distract you from the problem being real.
“This problem is real” is not itself a useful insight. It is useful to the extent that it might lead to the problem being fixed. And fixing problems is much, much, much harder than identifying them.
Perhaps you have noticed that Bay Area rents are really high. This is, indeed, a problem. But noticing that this is a problem is not a serious contribution to the conversation. If you have successfully identified a real problem, and then proposed a solution that will make it worse, you are not helping on net.
This goes double if someone points out that your solution will make it worse, and you say that you “aren’t even primarily interested in fault analysis at [that] level of granularity” and that them pointing that out will “distract you from the problem being real.”
My read of such cases is that the implicit reasoning is more like the first part only, i.e., “these people are (implicitly) blaming some group G for an alleged problem P. I want to demonstrate that these people are wrong to blame group G,”
where the reason they want to demonstrate that you are wrong is something like: they know an interesting fact that demonstrates this, which they think you ought to know. Or maybe they just want the social-status boost from showing you/the world that they know said interesting fact.
What makes you feel that there is usually an implicit “and therefore we no longer need to be worried about P”? I would assume that in the majority of cases if you followed up their statement by asking “Does this mean that you think P is fine actually?”, they wouldn’t say “yes”, they would say “no, I’m just saying that P isn’t G’s fault”
(edit: looking specifically at the linked WHO-related tweet, something a little different seems to be going on here, where the implicit reasoning seems like “these people are (implicitly) proposing we ought to do X to fix an alleged problem P, but P isn’t actually a problem and X would be really bad, so I’m going to push back”—which seems different. not defending the guy’s (poor) logic on the specifics here, just talking about the form of the argument)
The original assumption that some group G is being implicitly blamed may seem weird, but at least on social media, it seems like a lot of complaining about things happens with the primary goal of blaming someone, so I think it’s not crazy that some people would assume this.
I agree, though, that a) often “blaming someone” (or at least a group/institution) is en route to trying to get it fixed, especially for quite narrow asks like “your headline sucks,” and b) if your problem is with slacktivism writ large, I don’t think telling people they misidentified the subgroup within an institution that got something wrong is likely to persuade them to stop doing slacktivism.
Concretely, if I complain about a headline in the New York Times and tag them on Twitter, the predictable effects include a) decreasing the NYT’s status a little among my readers, b) if enough people raise a fuss/the right people see the complaints, the specific headline might get fixed, and c) optimistically, if even more people raise a fuss and the right people see such complaints with enough regularity, the NYT might change their headlines policy going forwards.
Maybe you think this is useless. But a) I think you’re empirically wrong about headlines at least sometimes, and b) I don’t think “the writers aren’t responsible for headlines” is a satisfying explanation for why it’s useless!
Similarly, writing about why a pandemic action is bad is one of the ways a journalist/quasi-public figure like Kelsey has to effect change. Maybe it’s worse than (eg) a white paper with step-by-step analysis of how each specific institution, and each team within an organization, could act differently, but I’m not sure that’s true[1]. Besides, the latter is a much higher bar!
From the outside, it’s easier to see how something is a problem than the specific steps necessary to fix it!
though on reflection, I do think the second example feels something like “to fix P, we would have to do something different, therefore P is intractable”, which does seem to me like a more common fallacy and probably most appealing to people who feel that P isn’t a big deal anyway.
the third example also has a similar vibe (“to fix P, we would have to do something different (specifically, X, which would be bad) (and P isn’t a big deal anyway)”).
I’ve found myself giving explanations (not as exonerations) when I suspect the other person is looking for a solution to their problem but does not know the levers to pull.
Yeah I agree there are times where unprompted fault-analysis is useful! YMMV etc.
Cease, Linch, to chide the follies of mankind,
Whose erring will, though seeming free, was wrought
In that first Forge whence reason, weak and warped,
Issued half-finished. Blame not Adam’s sons —
Arraign the Hand that shaped them so!
I think what’s often going on here is that saying “Actually, X happened because of Y” is a kind of status attack, trying to say that unlike the replier, the OP doesn’t even know about Y, so they hardly know what they are talking about. So it’s deployed by people who are trying to discredit the OP and/or their argument, or lift themselves and/or their own argument above them.
Thinking more, I think it’s also often generated out of cynicism, when people have basically given up on the idea that a problem could be solved, and they are explaining to you some premise they have about why that is, but they don’t really want to explain the whole thing in one big wall of text.
Yeah, I think the generous way to interpret the objector in this dialogue is as saying “I think you are underestimating how hard this problem is to fix”.
It’s not just that B is responsible instead of A, but that B has different constraints than A. Maybe you don’t see any constraints on A that would make the problem hard to solve. Yes—that is because the problem’s constraints are on B instead!
I think “the fact that you’re even complaining about X generically, rather than about X modulo B’s constraints, means you’ve thought about it less than I have” is an unfortunately common condescension.
I think that sometimes, a problem is the optimal solution. A lot of rules are trade-offs between things, and it doesn’t help anyone to complain about the negatives of the trade-off which was made, as “fixing” it merely pushes the problem elsewhere. (Or alternatively, it requires a higher granularity of laws and regulations)
I could complain about criminals getting away with their crimes, but fixing that would require lowering the threshold of evidence, which would mean that more innocent people were unfairly punished. And if this ‘solution’ were implemented, then we’d hear complaints about that.
Better solutions exist, but they require context-dependent judgement. A lot of the laws and regulations we follow are too general. In most contexts, it’s probably good that the WHO can’t enforce quarantines on sovereign states, and this just happened to be an exception.
I don’t know what to call this problem; the bias-variance tradeoff, maybe?
You’re far too willing to concede to their reformulation of the problem!
You can imagine a version of the world where we literally exhausted every other avenue (or every other avenue that isn’t even more tyrannical) and the only option left is either what actually happened or giving the WHO unchecked authority to enforce quarantines over sovereign states. But there are a number of actions that would be improvements without needing to give the WHO more authority in that way:
- The WHO can advise sovereign states to enforce quarantines.
  - This is already what public health departments usually do, including within a country. They usually advise local governments rather than having operational or legislative control over the state’s use of force.
- They can offer to broker multilateral agreements so that well-resourced states can have quarantine ships on site, and offer to take in everyone.
- Instead of looking at the WHO, we can ask why individual states are not doing quarantines.
- The WHO or another third party can offer individuals money to self-quarantine.
- We can set up an international apparatus (an international body, a nonprofit, or a for-profit company) to manage these quarantines.
- etc, etc
In general, there’s massive slippage between “this specific solution that you never proposed has sufficiently undesirable consequences” and “therefore the best realistic world is the status quo.”
This is something I’ve encountered often with this fallacy in particular. Otherwise rational, well-meaning, disinterested, etc., people often miss that Bob is making a complete reformulation of the problem, from a complaint about a bigger issue to a narrower fault-analysis reframing. For example, Claude’s review of an earlier draft of this post thought the AI safety example was kinda reasonable prioritization by the team (“You can still rebut with ‘rational prioritization that produces this outcome is itself the problem,’ but the response isn’t a non sequitur the way the headline or hantavirus replies are.”). But it ignored/missed that I never brought up the specific team. Without privileged information, I don’t know the underlying dynamic here. It could be that the team should have more headcount or compute. It could be that a different team should handle it. It could be a different answer altogether. The complaint is about the company, not the team[1].
But it’s very easy/tempting for the fault-analysis framing, pitched at the wrong level, to suck up all the oxygen.
Now if the defense was that the company made the right choice (eg “in an ideal world we’d want to go carefully and address all safety issues like that. But due to competitive pressures, and also because this specific prosaic safety issue can’t realistically pose an x-risk, our company is being very reasonable in not devoting more resources to solving this problem over other safety issues and racing”), that’d be a much more valid defense. I still think it’s wrong, but at least it’d be valid!
Perhaps this could be considered a specific case of ignoratio elenchi?
I think this is technically true, but iiuc pretty much every informal fallacy is a species of ignoratio elenchi in Aristotle’s broad sense; it’s a bit like spotting a new furry animal and correctly identifying it as a type of vertebrate.
I’ll also throw in AccountabilitySinks as prior art.
I agree it’s related!
Out-of-band theory: this is chat-assistant-driven societal change.
Insofar as this is a recent development, it could be explained by people often reaching for models to Explain Things. Such models are prone to explaining issues away systemically or from a third angle, since for various legal reasons they often cannot outright blame specific humans/issues, and this may be gradually adjusting human behavior to match.
I’m pretty sure I’ve encountered the first example pre-ChatGPT (it’s not an insanely common pattern, but I see it every couple of months). The other two examples are definitely post-LLM-chatbot popularity, though I’m not sure about other examples.
My memory isn’t amazing so I can’t really say how regular it was on the internet between say 2012 and 2022. Intuitively I’d be quite surprised if there’s a delve-level jump between before and after LLM chatbots. But I dunno if my pattern recognition skills are good enough to detect say a 2x jump.
n.b. models clearly only do this because humans make them do it, because 21st-century communication norms are like this, because internet dynamics have essentially flipped the equation on the value of fault assignment: exposure to the knowledge required to assign blame almost never equates to leverage over the causes, which makes it counterproductive to fault anything and more locally effective to explain everything away and reduce your attention to the matter. But this is really, really bad for anything ever actually getting better.