Maybe I’m just misunderstanding the structure of the essay, but I’m a bit confused by the second half of this essay — you begin to argue that there are benefits to designing accountability sinks correctly, but it seems like most of your subsequent examples to support this involve someone disobeying the formal process and taking responsibility!
The ER doctor skips process, turns over triage, and takes responsibility. His actions are defended by people using out-of-system reasoning.
The ATC skips process, comes back, and takes responsibility. Her actions are defended by people using out-of-system reasoning.
The same goes for Healthcare.gov, Boris Johnson, etc. They were operating in the context of accountability sinks that discouraged the thing they ultimately and rightly chose to do; within the system, they would have been forgiven for just following the rules.
Likewise, the free market example given feels like the total opposite of an accountability sink! The person who has the problem is in fact the person who can solve it. The free market does have a classic accountability sink, in the form of externalities, but how it’s framed here seems like a perfect everyday example of the buck stopping exactly where it should stop.
The second part begins with: “Second, limiting the accountability is often exactly the thing you want.” Maybe I should have elaborated on that, but an example is often worth a thousand words...
I did follow that turn, I just am confused by the examples you chose to illustrate it with. The first examples of Bell Labs and VC firms I agree match the claim, but not the subsequent ones.
I am imagining an accountability sink as a situation where the person held responsible has no power over the outcome, shielding a third party. So this is bad as in the airline example (the attendant is held responsible by the disgruntled passenger although mostly powerless; this shields the corporate structure; the problem is not resolved), and good as in the VC example (the VC firm is held responsible by investors for profits although mostly powerless; this shields startup founders so they can take risks; the problem is resolved successfully).
And if this is the frame you’re using, then I don’t see how the ER doctor and air traffic controller examples fit this mold?
Does the Bell Labs example match the claim, though…? My reaction upon reading that one was the same as yours on reading the other anti-examples you listed. OP writes:
The same pattern emerges when looking at successful research institutions such as Xerox PARC, Bell Labs, or DARPA. Time and again, you find a crucial figure in the background: A manager who deliberately shielded researchers from demands for immediate utility, from bureaucratic oversight, and from the constant need to justify their work to higher-ups.
So… there was one specific person—that “crucial figure”—who was accountable to the higher-ups. He “shielded” the researchers by taking all of the accountability on himself! That’s the very opposite of the “accountability sink” pattern, it seems to me…
I read this more like a textbook article and less like a persuasive essay (which are epistemologically harmful imo) so the goal may have been to provide diverse examples, rather than examples which lead you to a predetermined conclusion.