Bad Problems Don’t Stop Being Bad Because Somebody’s Wrong About Fault Analysis
Here’s a dynamic I’ve seen at least a dozen times:
Alice: Man that article has a very inaccurate/misleading/horrifying headline.
Bob: Did you know, *actually* article writers don’t write their own headlines?
…
But what I care about is the misleading headline, not your org chart.
Another example I’ve encountered recently (anonymized): a friend complained about a prosaic safety problem at a major AI company that went unfixed for multiple months. Someone else with background information “usefully” chimed in with a long explanation of organizational restrictions: why the team responsible for fixing the problem had limited resources like senior employees and compute, why not fixing the problem was actually the correct priority for them, etc. etc. etc.
But what I (and my friend) cared about was the prosaic safety problem not being fixed! And what this says about the company’s ability to proactively respond to and fix future problems. We’re complaining about your company overall. Your internal team management was never a serious concern for us to begin with!
Kelsey Piper wrote about the (horrifying) case where Hantavirus carriers in the recent outbreak on a cruise ship were released and sent back to their home countries on (often) public airplanes. No systematic quarantine seemed in place, and only some of the exposed people were even instructed to self-quarantine.
Now in light of new information we think it’s very unlikely that this’d end up being a pandemic (the virus isn’t contagious enough at human-to-human transmission). But sure seems like pure luck rather than careful risk-benefit analysis; we only learned about the low contagiousness from negative tests after the cruise ship passengers were sent home.
Seems pretty incompetent for humanity to manage a potential future pandemic this way!
Tweeters disagreed. They argued that everything’s fine because in fact the WHO as an advisory body can’t enforce legal quarantines on sovereign states.
Huh? Why is that relevant here? If this hantavirus outbreak was in fact as contagious as COVID (while maintaining the ~30% fatality rate common for past infections), Nature’s not going to be like “oops my bad. I was planning to kill 2 billion of you but I misunderstood your world’s by-laws for which entities are responsible for enforcing quarantines. I’ll just let y’all have a pass on this otherwise fatal pandemic and take my business elsewhere until you sort it out.”
In each of these examples, people’s reactions were something like explanation-as-exoneration: treating the descriptive fact of why something happened as if it answered the normative question of whether it should have.
This is a cognitive mistake or logical fallacy that is so wrong I’m not even sure how to address it. Like in the examples above, people weren’t even originally blaming the group that someone else rushed to defend! But even granting that they were, how does shifting the blame address the underlying problem?
The reasoning has to be something like “these people are (implicitly) blaming some group G for an alleged problem P. If I can demonstrate that these people are wrong to blame group G, then I’ve demonstrated that they’re Wrong. As Wrongness is a transitive property, therefore we can be sure that problem P isn’t real (???) and we no longer need to be worried about P”
Maybe I’m strawmanning, but I really don’t understand the logic here!
In some of those cases, like the second example about prosaic AI safety, clearly there’s a specific party feeling accused and defensive. So self-serving bias is at play. But most of the times I’ve encountered this fallacy in the wild it’s from seemingly disinterested third parties! So I really don’t know what makes people react in this way.
Now of course sometimes it does make sense to point out a different person or institution is at fault. For example, if Alice saw a bad headline and wrongly blames Carol, the innocent columnist, and plans to angrily email Carol about it, you can gently point out it’s not Carol’s fault but her editor Eddie. Alice can angrily email Eddie instead, problem solved! [1]
However, these explanations are often delivered in a way that doesn’t suggest a different person to blame, but instead implies, somehow, that you’re wrong for wanting a solution to begin with.
Another good adjacent reason stems from “ought implies can.” If it turns out a problem somebody complains about is impossible to solve (or practically infeasible, or too expensive, etc), it (sometimes) helps to inform them of this so they can set realistic expectations and/or complain about more tractable problems.
This is true both for physical impossibilities and for answers of the form “if everybody would just.”
But saying that one person or institution that you might think is at fault is not at fault isn’t exactly a proof that solving a problem is impossible! I don’t really see how it’s even evidence, most of the time.
Overall I’m pretty confused by this pattern of thinking. On the other hand we might have discovered a novel fallacy, so that’s fun!
[1] Though in my experience, if you email the writer about a bad headline, usually they can get it resolved anyway.
A sort of opposite conversational thing I see which I hate is something like:
Bob: I am really sorry, I made a big mistake and (something has gone wrong).
Alice: How did this go wrong?
Bob: We had that event that day and we were all distracted talking to the visitors and eating the cake. I wasn’t used to the new UI yet, and it looks like I must have muddled some of the inputs.
Alice: Well, that is no excuse.
Bob: I never said it was. You asked for an explanation. I owned this mistake in my first line of this dialogue.
The issue here is that Bob is trying to provide a real explanation, of the kind that would be useful for making a plan to avoid similar issues in the future. But Alice is instead trying to give him a dressing-down for his mistakes. Possibly both conversations should happen, but not at the same time, and both parties should know which one they are in.
(Although I still think asking for an explanation, and then complaining about excuses is a jerk play that is surprisingly common.)
A: “This is a bad problem! We should solve this problem by giving Group X more power to fix it!”
B: “Actually, it sure looks like this problem is plausibly caused by Group X, and certainly they’re exercising all the power they currently have to make it worse. I’m not sure what you hope to accomplish by giving them more power.”
A: “Bad problems don’t stop being bad just because someone is bad at fault analysis!”
B: “No, they don’t, but listening to your solutions to those bad problems can stop being a good idea because you’re bad at fault analysis.”
In the specific examples above, note that the As in question weren’t even primarily interested in fault analysis at the level of granularity that the Bs wanted to drag the problem into. The Bs also distract you from the problem being real.
“This problem is real” is not itself a useful insight. It is useful to the extent that it might lead to the problem being fixed. And fixing problems is much, much, much harder than identifying them.
Perhaps you have noticed that Bay Area rents are really high. This is, indeed, a problem. But noticing that this is a problem is not a serious contribution to the conversation. If you have successfully identified a real problem, and then proposed a solution that will make it worse, you are not helping on net.
This goes double if someone points out that your solution will make it worse, and you say that you “aren’t even primarily interested in fault analysis at [that] level of granularity” and that them pointing that out will “distract you from the problem being real.”
That’s simply false, unless the problem is already widely known anyway.
Often yes, sometimes no. But even if fixing it is harder, recognizing the problem is a necessary condition for a fix. So if fixing a problem is important, identifying the problem is *at least as important.*
Also a lot of the time the general shape of the problem is known but not specific instantiations, and the instantiations are decision-relevant. Eg it’s widely known that scientific research often doesn’t replicate. But it’s still useful to learn if specific load-bearing papers don’t replicate, and/or if some fields are more prone to research fraud than others.
Similarly, we know at a very high-level of abstraction that often news articles have misleading headlines. But it’s still relevant to know how misleading which headlines are in which ways, and also if an otherwise-respected publication does this much more than normal, this may be decision-relevant for subscription decisions, whether you as an amateur freelance writer want to pitch there, etc.
And “everybody already knows” that companies sometimes sacrifice safety for profits or other priorities. But the details matter!
Assume for the sake of the argument that everybody knows that Bay Area rents are too high. Are you really saying that the three examples I cited above are exactly the same? That “everybody” knows when specific headlines are misleading, that it’s a problem when you don’t quarantine potential carriers at the beginning of an outbreak, and that AI companies should fix safety problems in a timely manner?
Moreover, are you really saying that this generalizes further such that you can never identify a problem without proposing an end-to-end solution? Eg investigative journalism on corruption needs to propose incentive-compatible ways to reduce corruption, scientists identifying issues need to identify solutions in the same paper, AI risk isn’t a problem until people come up with end-to-end solutions, etc, etc?
Again, in the examples above, solutions at that level of granularity weren’t yet proposed!
I think you keep responding to a different essay than the one I wrote; your hypothetical and your wider claim about problem-identification both fail to engage with my actual examples.
Not all of them, but at least one. Let’s take the quarantine one first because that’s the one where I think you do well, and then look at the headlines one next. (The AI one doesn’t have enough detail for me to be clear on what’s going on there).
--------------------------------------------------------
I think you’re pretty much on the money for the quarantine one, because:
You are pointing to a relatively-large problem that many people are genuinely unaware of. (Before reading your post I had...uh...some vague idea that some people on a cruise ship had some virus).
You aren’t proposing any working solution—but you’re clear that you don’t have a proposed solution, and also aren’t proposing anything actively harmful.
I do think you may be missing some important points in your quarantine commentary. Like, yes. There are some potential major beneficial effects of these potential carriers being quarantined. By. Uh. Some international organization that can unilaterally detain citizens of multiple different sovereign nations overseas without trial and prevent them from returning home. That seems to me...potentially fraught.
But (as you say), you aren’t proposing solutions at that level of granularity, and you’re being up-front about that. You’re pointing to a situation that looks bad and saying “is there anything we can do about this?” Maybe, on reflection, the answer to this will end up being “it’s going to be a very big diplomatic effort to get even small steps on this problem, and it doesn’t seem likely to succeed.” But that’s not your fault, and everything you have said on this is reasonable.
--------------------------------------------------------
On the other hand, the headlines point I think is a much less encouraging one for your view.
Lots of headlines are misleading. What can we do about this? Should we go harass journalists about it? Or perhaps we should go harass editors about it? Those are both proposed solutions at a fair level of granularity. You...seem to me to rather be suggesting both of them. And both of them are (at least in my opinion) bad ideas.
The reason why headlines are misleading is because these businesses need to attract readers in order to continue to exist, and readers demand exaggerated headlines. Talking about which particular employees of a newspaper you should personally harass in order to improve the accuracy of the headlines is like talking about which particular employees of Walmart you should personally harass in order to make them stop selling Twinkies and start selling healthier food.
Here, I think that:
You are pointing to a relatively-small problem that everyone reading your post is already aware of.
You are proposing a solution that seems to me to be actively bad.
and so I think that your overall net score here does not come out positive.
--------------------------------------------------------
Not quite. I’m saying that the value of your contribution has two elements:
Pointing to a problem. You gain points here by pointing out real, large problems that your readers are not already familiar with.
Suggesting a solution. You gain points here by suggesting reasonable solutions, and lose points here by suggesting bad solutions.
If Alice points out a substantial new problem, without particularly proposing a detailed solution beyond ‘can we do something about this’, and people are responding by nitpicking issues like that, I agree that’s bad.
But I think it’s a vastly more common situation for Alice to point out a problem that everyone is already aware of and propose an actively bad solution. And I think that the correct response to that looks almost identical to the nitpicking that annoys you.
My read of such cases is that the implicit reasoning seems more like the first part only, ie, “these people are (implicitly) blaming some group G for an alleged problem P. I want to demonstrate that these people are wrong to blame group G”
where the reason for them wanting to demonstrate that you are wrong is something like, they know an interesting fact that demonstrates this, which they think you ought to know, or maybe just want the social status boost from showing you/the world they know said interesting fact
What makes you feel that there is usually an implicit “and therefore we no longer need to be worried about P”? I would assume that in the majority of cases if you followed up their statement by asking “Does this mean that you think P is fine actually?”, they wouldn’t say “yes”, they would say “no, I’m just saying that P isn’t G’s fault”
(edit: looking specifically at the linked WHO-related tweet, something a little different seems to be going on here, where the implicit reasoning seems like “these people are (implicitly) proposing we ought to do X to fix an alleged problem P, but P isn’t actually a problem and X would be really bad, so I’m going to push back”—which seems different. not defending the guy’s (poor) logic on the specifics here, just talking about the form of the argument)
The original assumption that some group G is being implicitly blamed may seem weird, but at least on social media, it seems like a lot of complaining about things happens with the primary goal of blaming someone, so I think it’s not crazy that some people would assume this.
I agree though that a) often “blaming someone” (or at least a group/institution) is on the route to trying to get it fixed, especially for quite narrow asks like “your headline sucks,” and b) if your problem is with slacktivism writ large, I don’t think telling people they misidentified the subgroup within an institution that gets something wrong is likely to persuade them to stop doing slacktivism.
Concretely, if I complain about a headline in the New York Times and tag them on Twitter, the predictable effects include a) decreasing the NYT’s status a little among my readers, b) if enough people raise a fuss/the right people see the complaints, the specific headline might get fixed, and c) optimistically, if even more people raise a fuss and the right people see such complaints with enough regularity, the NYT might change their headlines policy going forwards.
Maybe you think this is useless. But a) I think you’re empirically wrong about headlines at least sometimes, and b) I don’t think “the writers aren’t responsible for headlines” is a satisfying explanation for why it’s useless!
Similarly, writing about why a pandemic action is bad is one of the ways a journalist/quasi-public figure like Kelsey has to effect change. Maybe it’s worse than (eg) a white paper with step-by-step analysis of how each specific institution, and each team within an organization, could act differently, but I’m not sure that’s true[1]. Besides, the latter is a much higher bar!
From the outside, it’s easier to see how something is a problem than the specific steps necessary to fix it!
Though on reflection, I do think the second example feels something like “to fix P, we would have to do something different, therefore P is intractable,” which does seem to me like a more common fallacy, and probably most appealing to people who feel that P isn’t a big deal anyway.
The third example also has a similar vibe (“to fix P, we would have to do something different (specifically X, which would be bad) (and P isn’t a big deal anyway)”).
In Bob’s defense there, that’s often taken as implicit and/or likely. It’s a common mistake to think that authors write their own headlines, and for people with complaints about headlines to blame the author. Chiming in with “authors don’t write their own headlines” doesn’t fix the issue of bad headlines, but it does correct the (likely) mistake in Alice’s head about who or what to blame.
Although in that case I suppose you could get really annoying and go “erm, actually, it’s the financial incentives that really determine how headlines turn out, because good headline writers fail and stop writing headlines.” And understanding that allows us to look at the actual questions:
Is there an actual issue, now that we understand the real cause?
Is there an actual fix?
If it’s “authors keep writing bad headlines,” that seems more fixable than “financial incentives reward bad headlines, and good headline writers literally die off.”
Yeah, in Bob’s defense, I’ve done the same in the past (gently informed someone that the writer wasn’t responsible for the headline, under the impression that I was helping). Though overall I’m not sure this story checks out.
I think there are incentives pushing both towards misleading headlines and towards non-misleading (or less misleading) headlines. Incentives for the latter include people cancelling their subscriptions, people getting burned by misleading headlines one too many times and ceasing to pay attention to sources known for them, journalists or other staff quitting because they’re embarrassed to be associated with the terrible headlines, internal complaints that don’t rise to quitting, etc. Part of the value of complaining about misleading headlines is to (slightly) increase the cost of said misleading headlines[1].
Each individual complaint doesn’t do much, but people complaining about the headlines aren’t independent of the incentives the companies face.
I don’t think the headline equilibrium is anywhere near maximum slop (you can imagine a result where each headline is RL’d to maximize click-through rates and is completely independent of the article in question), so I want to caution against a sort of weak nihilism that looks something like “What Can I Do, It’s Just the Incentives.”
I’m not sure about the empirical picture here, but my general impression is that at least in the English-speaking ~educated internet, headlines in the last ten years have on average gotten better rather than worse at not being misleading.
Though if the business model is truly clickbait, there is of course some backfire risk.
I’ve found myself giving explanations (not as exonerations) when I suspect the other person is looking for a solution to their problem but does not know the levers to pull.
Yeah I agree there are times where unprompted fault-analysis is useful! YMMV etc.
When I see this dynamic, it typically seems that Alice is implicitly saying the thing is easy to fix and that the thing being broken reflects some deep incompetence. When I am Bob, I’m trying to explain why fixing the thing is actually really hard, and that it’s downstream of a bunch of complexity Alice probably wasn’t thinking about. I think this is actually extremely relevant data when the difficulty of fixing the thing is important!
If Alice just wants emotional support and to complain about things being sad, without any orientation towards fixing it or the difficulty of the problem, then I agree with this post.
I’m most sympathetic to this when the explanation is at a level of analysis that shows fixing the problem is impossible, or practically impossible, or when the provided cost-benefit analysis is sufficient to demonstrate why fixing it is unrealistic. The best common example is when it’s an economic issue, one interlocutor is more economically literate than the other, and both are talking in approximately good faith.
Alice: I can’t believe the prices of groceries are up again! I bet it’s because of the greedy CEOs in charge of Berkeley Bowl!
Bob: That sucks. But I think it’s implausible that changes in the price of groceries are substantially tied to the greed of CEOs, because we should expect CEO greed to be a constant. You can’t use a constant to explain a sudden change!
Alice: Oh I get it now. But do you have an actual explanation for why the prices are up?
Bob A: No, sorry. But in my experience price changes aren’t something we can do much about, and aren’t something that’s responsive to pushback.
Bob B: Yes, I think it’s because of [specific explanation involving the Iran war and fuel prices]
__
In contrast, I’m less sympathetic to these explanations when, if you follow the logic, it doesn’t actually say someone isn’t locally responsible. Or when “this is hard” explanations only say why it’s hard for a specific mechanism rather than a general class of solutions.
I’m also more sympathetic to this when Bob’s actually right, though of course that’s challenging as a norm.
There’s a related concept of blame laundering: by making ourselves dependent on a third party for reliability, we can then wash our hands clean of any blame when things go wrong.
If you host your website on-prem, then it’s always your fault when the website’s down and your customers can’t track the foobars. But if you host on AWS, and AWS goes down, well then obviously it’s not your fault—it’s AWS’s fault!
Of course this is nonsense. Your customers contracted with you, not AWS. They don’t care about your infra, all they care about is tracking foobars, and if it’s down, it’s your fault, whether it’s down because of a fire in us-east-1 or you tripped on the power cable for your workstation. You contracted with AWS, and can and should complain to them, but that doesn’t in any way exonerate you for your duty towards your customers.
Cease, Linch, to chide the follies of mankind,
Whose erring will, though seeming free, was wrought
In that first Forge whence reason, weak and warped,
Issued half-finished. Blame not Adam’s sons —
Arraign the Hand that shaped them so!
I think what’s often going on here is that saying “Actually, X happened because of Y” is a kind of status attack, trying to say that unlike the replier, the OP doesn’t even know about Y, so they hardly know what they are talking about. So it’s deployed by people who are trying to discredit the OP and/or their argument, or lift themselves and/or their own argument above them.
Thinking more, I think it’s also often generated out of cynicism, when people have basically given up on the idea that a problem could be solved, and they are explaining to you some premise they have about why that is, but they don’t really want to explain the whole thing in one big wall of text.
Yeah, I think the generous way to interpret the objector in this dialogue is as saying “I think you are underestimating how hard this problem is to fix”.
It’s not just that B is responsible instead of A, but that B has different constraints than A. Maybe you don’t see any constraints on A that would make the problem hard to solve. Yes—that is because the problem’s constraints are on B instead!
I think “the fact that you’re even complaining about X generically, rather than explicitly about X modulo B’s constraints, means you’ve thought about it less than I have” is an unfortunately common condescension.
I think that sometimes, a problem is the optimal solution. A lot of rules are trade-offs between things, and it doesn’t help anyone to complain about the negatives of the trade-off which was made, as “fixing” it merely pushes the problem elsewhere. (Or alternatively, it requires a higher granularity of laws and regulations)
I could complain about criminals getting away with their crimes, but fixing that would require lowering the threshold of evidence, which would mean more innocent people being unfairly punished. And if this ‘solution’ were implemented, then we’d hear complaints about that.
Better solutions exist, but they require context-dependent judgement. A lot of the laws and regulations we follow are too general. In most contexts, it’s probably good that the WHO can’t enforce quarantines on sovereign states, and this just happened to be an exception.
I don’t know what to call this problem; the bias-variance tradeoff, maybe?
You’re far too willing to concede to their reformulation of the problem!
You can imagine a version of the world where we literally exhausted every other avenue (or every other avenue that isn’t even more tyrannical), and the only options left are either what actually happened or giving the WHO unchecked authority to enforce quarantines over sovereign states. But there are a number of actions that would be improvements without needing to give the WHO more authority in that way:
The WHO can advise sovereign states to enforce quarantines.
This is already what public health departments usually do, including within a country. They usually advise local governments rather than have operational or legislative control of the state use of force.
They can offer to broker multilateral agreements so that well-resourced states can have quarantine ships on site, and offer to take in everyone.
Instead of looking at the WHO, we can ask why individual states are not doing quarantines.
The WHO or another third party can offer individuals money to self-quarantine.
We can set up an international apparatus (an international body, a nonprofit, or a for-profit company) to manage these quarantines.
etc, etc
In general, there’s massive slippage between “this specific solution that you never proposed has sufficiently undesirable consequences” and “therefore the best realistic world is the status quo.”
This is something I’ve encountered often with this fallacy in particular. Otherwise rational, well-meaning, disinterested, etc., people often miss that Bob is making a complete reformulation of the problem: from a complaint about a bigger issue to a narrower fault-analysis reframing. For example, this came up with an earlier draft of this post.
Claude’s review thought the AI safety example was a kinda reasonable prioritization by the team (“You can still rebut with ‘rational prioritization that produces this outcome is itself the problem,’ but the response isn’t a non sequitur the way the headline or hantavirus replies are.”). But it ignored/missed that I never brought up the specific team. Without privileged information, I don’t know the underlying dynamic here. It could be that the team should have more headcount or compute. It could be that a different team should handle it. It could be a different answer altogether. The complaint is about the company, not the team[1].
But it’s very easy/tempting for the mistaken fault-analysis framing to suck up all the oxygen.
Now if the defense was that the company made the right choice (eg “in an ideal world we’d want to go carefully and address all safety issues like that. But due to competitive pressures, and also because this specific prosaic safety issue can’t realistically pose an x-risk, our company is being very reasonable in not devoting more resources to solving this problem over other safety issues and racing”), that’d be a much more valid defense. I still think it’s wrong, but at least it’d be valid!
One of the systemic issues is that we do not innately understand the ontology of prevention.
There is no benchmark for “infections prevented,” just as there is no benchmark for “buildings uncollapsed by earthquakes,” “windows unbroken by hurricanes,” or, with AI, “inaccurate claims unconfabulated.”
Our neurological architecture struggles to model second-order effects, and the trust+time+attention+energy+memory it requires to listen, evaluate, and act on someone else’s ideas about the future is usually a cost we do not pay when we cannot see the idea for ourselves.
Perhaps this could be considered a specific case of ignoratio elenchi?
I think this is technically true but iiuc pretty much every informal fallacy is a species of ignoratio elenchi in Aristotle’s broad sense; it’s a bit like spotting a new furry animal and correctly identifying it as a type of vertebrate.
I’ll also throw in AccountabilitySinks as prior art.
I agree it’s related!
Out-of-band theory: this is chat assistant driven societal change.
Insofar as this is a recent development, it could be explained by people often reaching for models to Explain Things. Models are prone to explaining issues away systemically or from a third angle (for various legal reasons they often can’t outright blame specific humans or issues), and this could gradually adjust human behaviors to match.
I’m pretty sure I’ve encountered the first example pre-ChatGPT (it’s not an insanely common pattern, but I see it every couple of months). The other two examples are definitely post-LLM-chatbot popularity, though I’m not sure about other examples.
My memory isn’t amazing so I can’t really say how regular it was on the internet between say 2012 and 2022. Intuitively I’d be quite surprised if there’s a delve-level jump between before and after LLM chatbots. But I dunno if my pattern recognition skills are good enough to detect say a 2x jump.
n.b. Models clearly only do this because humans make them do it, because 21st-century communication norms are like this, because internet dynamics have essentially flipped the equation of the value of fault assignment: exposure to the knowledge required to blame almost never equates to leverage over the causes, which makes it counterproductive to fault anything and more locally effective to try to explain away everything and reduce your attention to the matter. But this is really, really bad for anything ever actually getting better.