Beyond Blame Minimization: Thoughts from the comments

There were a lot of fantastic comments on this post. I want to break down some common themes and offer my thoughts.

Incentives

Unsurprisingly, there was a lot of focus on the role that incentives—or a lack thereof—play in a bureaucracy.

Dumbledore’s Army:

I’ve been asking myself the same question about bureaucracies, and the depressing conclusion I came up with is that bureaucracies are often so lacking incentives that their actions are either based on inertia or simply unpredictable. I’m working from a UK perspective but I think it generalises. In a typical civil service job, once hired, you get your salary. You don’t get performance pay or any particular incentive to outperform.[1] You also don’t get fired for anything less than the most egregious misconduct. (I think the US has strong enough public sector unions that the typical civil servant also can’t be fired, despite your different employment laws.) So basically the individual has no incentive to do anything.

As far as I can see, the default state is to continue half-assing your job indefinitely, putting in the minimum effort to stay employed, possibly plus some moral-maze stuff doing office politics if you want promotion. (I’m assuming promotion is not based on accomplishment of object-level metrics.) The moral maze stuff probably accounts for tendencies toward blame minimisation.

Some individuals may care altruistically about doing the bureaucracy’s mission better, eg getting medicines approved faster, but unless they are the boss of the whole organisation, they need to persuade other people to cooperate in order to achieve that. And most of the other people will be enjoying their comfortable low-effort existence and will just get annoyed at that weirdo who’s trying to make them do extra work in order to achieve a change that doesn’t benefit them. So the end result is strong inertia where the bureaucracy keeps doing whatever it was doing already.

tailcalled quotes John Wentworth, who talks about incentives in terms of degrees of freedom:

Responding to Chris: if you go look at real bureaucracies, it is not really the case that “at each level the bosses tell the subordinates what to do and they just have to do it”. At every bureaucracy I’ve worked in/around, lower-level decision makers had many de facto degrees of freedom. You can think of this as a generalization of one of the central problems of jurisprudence: in practice, human “bosses” (or legislatures, in the jurisprudence case) are not able to give instructions which unambiguously specify what to do in all the crazy situations which come up in practice. Nor do people at the top have anywhere near the bandwidth needed to decide every ambiguous case themselves; there is far too much ambiguity in the world. So, in practice, lower-level people (i.e. judges at various levels) necessarily make many many judgement calls in the course of their work.

And Dagon on weak or missing feedback:

I think a reasonable model for it is “mission motive”—somewhat like any other motive, but with a very weak or missing feedback mechanism. Without being able to track results, and with no market discipline (failure → bankruptcy when the motive is aligned with existence), you get weird behaviors based on individual humans with unrefined models.

Other comments alluded to these ideas as well—I can’t quote everyone, sadly. But let me try to pull these ideas together and make use of them.

In economics, the most obvious way to think of an incentive is that it is a tendency for things to go one way rather than another way. If people are incentivized by money, for example, and there are two paths in front of them, and one of them has a $20 bill at the end and the other doesn’t, then we expect people to go down the path of the $20 bill more than randomness predicts. We don’t have to talk about human psychology or the profit motive or anything. Instead, if we observe that a complex system consistently tends towards certain parameters, then we can say that it is incentivized to do so. So if we see plants consistently grow in the direction of the sun, then we can say plants are incentivized by sunlight without ever claiming to know what goes on inside of a plant.
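To make this “tendency, not psychology” framing concrete, here is a minimal toy sketch of my own (nothing from the post or the comments, and the 80% bias is an arbitrary assumption): we watch a black-box system choose between two paths many times, and we infer an incentive purely from how far its observed behavior departs from chance.

```python
import random

def choose_path(bias_toward_reward=0.8):
    """One observed choice: path 1 has the $20 bill at the end.
    The internal mechanism is deliberately hidden from the observer."""
    return 1 if random.random() < bias_toward_reward else 0

# Observe the system many times, recording only its outward behavior.
observations = [choose_path() for _ in range(10_000)]
rate = sum(observations) / len(observations)

# If the system picked a path at random, the rate would sit near 0.5.
# A consistent departure from chance is what we are calling an "incentive":
# a tendency of the physical system, inferred without opening the black box.
print(f"Fraction of choices toward the $20 path: {rate:.2f}")
print("Looks incentivized" if abs(rate - 0.5) > 0.05 else "Looks like chance")
```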

(Not that there’s anything wrong with learning about what goes on inside of a plant.)

An incentive, in this sense, is simply a tendency of a physical system, not a psychological factor per se. We could therefore think of an incentive in a few different ways. One is certainly in terms of feedback mechanisms. For a plant to be incentivized by sunlight, the plant needs a way of detecting the sunlight and of repeatedly tracking how its own movement relates to the light. Similarly, to be incentivized by the $20 bill, people need to be able to see it and to have some way of determining that their motion is taking them closer to, rather than farther from, the money.

One such feedback mechanism, as Dumbledore’s Army notes, is getting fired. If you work for a business, and your actions are consistently inconsistent with its goal of profit maximization, it will fire you. In terms of pathfinding, the employee might be thought of as a blood cell in an organism: the system is designed so that the narrow path the blood cell is incentivized, or tends, to follow is also the path that suits the overall system’s needs. Eliminating unhelpful parts is a strict but powerful way of solving problems.

If bureaucracies are poorly incentivized, then there are a few ways we can understand this proposition:

  1. Bureaucracies are tightly incentivized to do bad things. This is a system that is well-designed in an objective engineering sense, but what it is designed to do is undesirable. A killing squad, à la certain clichéd depictions of Nazis, may be such a system. But this isn’t what anyone seems to be talking about, and it certainly isn’t what I’m trying to understand either.

  2. Bureaucracies are not tightly incentivized. By this I mean that bureaucracies do not provide narrow paths for their subunits: there are too many degrees of freedom. The paths that employees follow, the activities that they engage in, are consequently not always optimal from the system’s perspective even though the employees are behaving reasonably within their individual circumstances. This relates to the issue of being able to fire someone: firing someone is a very sharp way of cutting off paths.

  3. Bureaucracies have no clear goal, such that a tightly incentivized system may nevertheless contradict itself. Imagine a body that does an excellent job of “incentivizing” blood cells to flow down the arteries of the left leg and does an excellent job of “incentivizing” blood cells to not flow down the arteries of the right leg. This is a well-incentivized system that also is arguably not a coherent system. The bureaucracy’s “mission motive” is unclear, or self-contradictory as a whole.

Setting possibility 1 aside, possibilities 2 and 3 are what result in the “Bwuh?!” systems we are trying to understand. At an extreme, someone who literally cannot be fired or disciplined in any way does not have to do their job. As a consequence, a bureaucracy may be unable to initiate some step in a process meant to achieve its goals. From the outside, such a system will be inexplicably slow and cumbersome, frequently forced to route around itself for no apparent reason.

Similarly, a bureaucracy might be tasked with incompatible goals. For example, a school system might be meant to educate children, but it might be incentivized to have them perform well on tests to such a degree that achieving the latter comes at the expense of the former. Such a system might talk earnestly and sincerely about the value of education while also predictably failing to educate, creating perpetual confusion among outside observers.

So maybe we could think about incentives as the physical structure that defines a clear motive, be it a tendency to move toward money, sunlight, or whatever else. What a system “wants” to do is what it consistently tends to do even when entropy would otherwise dictate that the outcome is highly unlikely. A consistent state that is consistently achieved despite being a priori unlikely to occur is a preferred state, and a system that tends to a preferred state is incentivized by it, meaning there is some physical structure causing said tendency.

With this reduction of the concept of an incentive, the question is, how can we better incentivize bureaucracies?

And so I ask you: is there a way to incentivize bureaucracies as strongly, clearly, and self-consistently as a textbook firm is without turning a bureaucracy into a textbook firm?

Selection Effects

The next most common analysis was on the role of selection effects. AllAmericanBreakfast says,

I favor selection-based arguments in this area. Businesses that happen to be profit-maximizing tend to survive, along with their leadership. This doesn’t mean that leaders always believe that every decision they make is a profit-maximizing decision, and the important thing is the overall trend. Many mistakes are made, and there’s a lot of random noise in the system that can defeat even the wisest of profit-maximizing strategies.

To understand the behavior of bureaucracies, we need to understand what causes them to survive. I think that blame avoidance is a stronger argument than you’re making it out to be.

Short-term budget (or power) maximization can fail to explain their behavior, because a swollen bureaucracy that’s mis-managing its money or power is a ripe target for politicians. For survival, bureaucracies should aim to please the electorate, or at least be seen as less blameworthy than some other organization.

Your argument about the CDC and the rental market conflates responsibility-minimization with blame-minimization. A bureaucracy that reduces its responsibility to zero is dead. Having responsibilities is central to bureaucratic survival. And bureaucracies don’t have perfect control over what responsibilities are allocated to them. The CDC couldn’t necessarily control the amount of responsibility thrust upon them in the pandemic. They were trying to avoid blame for excessive COVID deaths, and in order to do that they assumed temporary responsibility over the rental market (and, predictably, rid themselves of it when the negative consequences manifested).

I think the point that survival necessitates a certain degree of being able to demonstrate value is a good one. Perhaps an “ideal” bureaucracy would do nothing and simply soak up some salaries for its employees, but it would be shut down if it had nothing to offer politicians or interest groups who can influence politicians.

Matthew Barnett:

Here’s some background first. Firms are well-modeled as profit maximizers because, although they employ internal bureaucracies to achieve their ends, bureaucracies that are bad at the task of making investors money are either selected out over time due to competition, or are pruned due to higher-up managers having relatively strong incentives to fire people who are not making investors money. This model relies on an assumption that investors themselves are usually profit-maximizers, which seems uncontroversial.

By contrast, government bureaucracies lack the pressures of competition, though they can (though less commonly) be subject to pruning, especially at the higher levels. I can think of two big forces shaping the motivations of government bureaucracies: the first being internal pressures on workers to “do work that looks good” to get promoted, and the second being a pressure to conform to the desires of the current president’s political agenda (for people at the top of the bureaucracy).

Selection effects are like the other side of the coin to incentives. If incentives are structures that cause a system to exhibit tendencies, then selection effects are structures that limit which tendencies can be observed. So in a profit-maximizing firm, for example, maybe there is a tendency to open early and close late because you make more money that way. Minimum-wage workers and high-level executives alike would mostly prefer to come in at noon and leave at 3pm. But such firms would be selected out.

If bureaucracies generally do not get shut down, and individuals generally do not lose their jobs, then they can have inconvenient hours at offices in inconvenient locations. They can make lots of rules and forms that make life difficult for the very people they serve. Even if no bureaucrat maliciously wants to make things difficult for anyone, in the absence of forces that weed out such inconveniences, they will only ever increase in prevalence.

Similarly, in the absence of positive selection effects, a good idea at one bureaucracy will not spread to others. Whereas every firm has had to adapt to the Internet or face extinction, for example, bureaucracies often tend to be slower to adopt principles of good web design or paperless service.

Selection effects offer a clear explanation for why bureaucracies are often confusing. If a confusing system—a system that does not do a good job of tightly constraining itself to follow a path toward a set of mutually consistent goals—is much easier to create than a non-confusing system, for basic entropy reasons if nothing else, then confusing systems will tend to proliferate over non-confusing ones in the absence of selection effects against the former.
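Here is a small toy model of my own (the population size, the five-goal systems, and the coherence threshold are all arbitrary assumptions, purely for illustration) to make the entropy point vivid: randomly assembled systems are usually incoherent, and it is only the survival filter that makes the observed population look narrowly “motivated”.

```python
import random

def random_system(n_goals=5):
    """A 'system' is a bundle of pulls on one variable: +1 or -1 per goal.
    Mutually consistent goals all pull the same way; mixed signs are 'confusing'."""
    return [random.choice([-1, 1]) for _ in range(n_goals)]

def coherence(system):
    # 1.0 when every goal pulls in the same direction, near 0 when they cancel out.
    return abs(sum(system)) / len(system)

population = [random_system() for _ in range(100_000)]

# Without selection: most randomly assembled systems are incoherent, simply
# because there are far more mixed configurations than aligned ones.
avg_unselected = sum(coherence(s) for s in population) / len(population)

# With selection: only systems above a coherence threshold "survive".
survivors = [s for s in population if coherence(s) >= 0.6]
avg_selected = sum(coherence(s) for s in survivors) / len(survivors)

print(f"Average coherence, no selection:   {avg_unselected:.2f}")
print(f"Average coherence, with selection: {avg_selected:.2f}")
print(f"Survival rate under selection:     {len(survivors) / len(population):.2%}")
```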

Structure Preservation

Let me try to bring the twin concepts of incentives and selection effects together into a third concept: the idea of a structure-preserving system. To preserve the structure of something is to behave in such a way that your behavior is a model of the other thing, allowing us to deduce things about it by studying you.

In economics, a familiar example of a structure-preserving system is a utility-maximizer. For an entity to be rational, its utility function, which determines its behavior, must preserve the structure of its preference order, so that an apple higher in preference than an orange must also be higher in utility, meaning that the system—a human body, in this case—shows a greater tendency to eat apples rather than oranges, all else held equal. Conceivably, therefore, it is possible to watch a rational economic agent’s behavior and deduce the structure of its preferences, because said structure is preserved by the behavior of the agent.
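To spell out the order-preservation being described, here is the standard textbook condition (my own gloss, not anything from the comments):

```latex
% A utility function u represents (preserves the structure of) a preference
% order \succsim exactly when, for all options a and b:
\[
  a \succsim b \iff u(a) \ge u(b)
\]
% So if apple \succ orange, then u(\text{apple}) > u(\text{orange}), and the
% agent's observed choices should tend toward the apple whenever both are on offer.
```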

Relatedly, for an incentive to have actual physical meaning—for it to cause a measurable tendency in a system—it must interact with the system so as to preserve some structure of the thing-providing-the-incentive. Direction with respect to the system is a structure frequently preserved. For example, plants tend to grow in the direction of the sun, and people tend to spray bug spray in the direction of bugs. Even though we act so as to destroy bugs, we do so by taking action that is a function of facts about bugs, such as where they are relative to us. Thus, it is possible for an outside observer to whom the bugs are invisible to watch our behavior and deduce, at the very least, the direction of the bugs relative to us. The ability of an outside observer to watch us and deduce at least some properties of something else is what it means for our behavior to preserve some of that other thing’s structure.

Incentives only make sense in terms of structure-preservation. To be incentivized by money is to preserve structure about which jobs pay the most, what kinds of educational choices tend to allow entry into those jobs, etc. One also needs to behave in a way that captures the idea that $100 is twice as much as $50, and so on. Someone whose behavior fails to preserve the structure of where money is, and of how much more money is over here relative to over there, is not really incentivized by money.

Selection effects, similarly, limit observable systems to those which preserve a narrow range of structures. Anything that evolves in the ocean, for example, must behave in such a way as to preserve the structure of water. As a result of such pressures, we can expect systems in a given environment to consistently preserve a consistent set of structures.

The economic systems that we understand with relative clarity are systems that preserve a particular structure clearly and consistently. We can easily tell that individual humans are following their preferences, firms are following money, and politicians are following votes. Even if I can’t see profit opportunities myself—I have no specific knowledge about why, e.g., raspberry Pop-Tarts, made the way they are, sold at the price they’re sold at, are profitable—I can nevertheless watch Kellogg’s move in a particular direction and deduce the presence of profit in that direction, just like I can see a human spray bug spray in a particular direction and deduce the presence of bugs in that direction. Similarly, if a politician suddenly starts showing off many pictures of themselves hugging a bald eagle, I don’t have to understand why this earns them votes to deduce that this behavior preserves some of the structure of voters.

The tendencies that people, firms, and politicians exhibit reflect their incentives, and the reason we only ever see such strongly incentivized systems is the selection effects at play. Firms that fail to make money go out of business; politicians that fail to win votes lose office; people who ignore the structure of their environment get killed by their environment, or at the very least fail to consistently acquire food and water. Selection effects themselves are a kind of physical structure that determines which tendencies tend to be exhibited. A firm that maximizes office space is a well-incentivized system, but it will nevertheless be destroyed by selection effects: that particular tendency does not tend to exist.

Some commenters observed that bureaucracies do not tend to exhibit only a narrow set of tendencies. Here is rur:

Assuming the bureaucracy is hierarchical, the maximizer may vary depending on the level. At the lower levels, a process-maximizer may best model behavior. Map versus territory. Akin to a mis-aligned AI paperclip-maximizer, reward is based on adherence to process, results do not matter. Mid-hierarchical levels are budget-maximizers. Body-count may be a surrogate. The bureaucratic topology that emerges and morphs at these mid-levels is where things become chaotic for the higher levels. Perhaps entrenchment, power, consensus, and hubris-maximizers join the dance. Predicting behavior at these higher levels may be more a matter of profiling than modeling. Regardless, the bureaucracy as a whole is more like an oil tanker than a jet ski. Its behavior in the near term is rather obvious.

Phil Scadden:


I worked in and with a few bureaucracies in NZ and I very much doubt there is a single model to explain or predict behavior, because multiple utilities and motivations are present. They are plagued (as are private companies) by the levels problem where information between levels of management can get twisted by differing motivations and skill level. As other commentators have pointed out, upper levels of the management can be extremely risk adverse because they crucified for mistakes and unrewarded for success. While “blame-minimization” might seem appropriate, there are other factors at play. Large among them would be motivation. Some bureaucrats are empire-builders and their utility function is ever-increasing areas of control, (career administrators in middle-management role) but others got into the game in the first place because they wanted to change the world, and the tools of government seemed like a good place to find power. With that kind of motivation, they tend to rise quickly and I see a fair no. of them in high positions, especially in education, health, welfare. They feel the forces of blame, but are individually motivated to make change. Good luck predicting outcomes there.
The other prediction problem would relate to where in the organization that a decision is made. The more technical the decision, the more likely that is being made at low level in organization among the technocrats. The decision may still have to percolate up the levels which it may be misunderstood or subtly reframed to make a middle manager look good, (another predictability problem) but mostly I would expect such decisions to reflect perceived technical utility. (eg best timing for a booster vaccination).

michaelkeenan quotes from a lengthy post by Dominic Cummings as to why structure-preservation in a bureaucracy is so unlikely, which is certainly worth reading if you want to peer into the nuts and bolts of the system. And shminux points out what this means for a potential success condition:

A bureaucracy works well when every person has a vested interest in the shared success more than in whatever Goodhart incentives tend to emerge in the bureaucratic process. An essential (but by no means sufficient) part of it is the right amount of slack. With too little slack the Goodhart optimization pressures defeat all other incentives.

This mirrors my own experiences that the quality of parts of a bureaucracy can depend strongly on the personal characteristics of the members. People who want to make things work can overcome great adversity, and people who don’t can fail to hit a bullseye the size of a football field.

So let’s try to make this clear. For a person to be utility-maximizing means that they act so as to preserve their own preferences. So if we see them reach for a box of Pop-Tarts at the grocery store, we predict that we can go back and see that this reaching-behavior preserves the preference-structure within the brain, meaning that, if we had some sort of appropriate measuring apparatus, we would expect to see that the arm’s reach was an instruction from the brain; the arm functions so as to fulfill the brain’s orderly, self-consistent predictions. Finding a signal-passing connection between the brain and the arm, like nerves extending between them, would bolster this theory.

A firm is profit-maximizing, meaning that it sells what its customers spend money on. So if it sells Pop-Tarts, we predict that customers tend to buy Pop-Tarts, a prediction we can confirm by standing in the store and noticing that customers do indeed come in and buy Pop-Tarts far more often than people with no such tendency could ever be expected to do by random chance.

And while the median voter is a statistical construct and not a precisely identifiable figure, nevertheless we can see a politician do something and suppose that it preserves the structure of what the middle of their voters tend to vote for, which we can test for by figuring out approximately who that is and finding out what they tend to vote for.

What does the system do, why does it behave the way that it does? The answer, the general structure of economic explanation, is to say, “The system behaves so as to consistently preserve some consistent structure.” It does so because it is incentivized to do so—there are physical structures in place that cause the tendency to be achieved very consistently—and because of selection effects—there are physical structures in place that cause only a narrow set of tendencies to be achieved consistently.

When things tend to achieve the same narrow, self-consistent set of things over and over, we can eventually model them as if they “want” those things. The rare states consistently achieved are “preferred”. But psychologizing aside, what we’re really observing is merely a highly consistent set of mutually consistent outcomes over time. Hence people talk about evolution “wanting” us to reproduce, even though we know that evolution is a statistical tendency, not a psychology.

(But what, then, is a psychology?)

We may not observe such tendencies in a bureaucracy, in which case we will not be able to model them as consistently achieving a consistent and self-consistent set of goals. In other words, they will make us go “Bwuh?!” a lot. This will happen even when they strongly declare certain goals and often do seem to be trying to achieve them. I am pretty sure, for example, that most everyone working at the CDC really do want to minimize the harms of COVID-19, and many of the things done by the CDC are probably best explained in terms of attempting to achieve said minimization. Nevertheless, I do not feel like I can model the CDC as a harm-minimizer or anything else in particular.

Moral Mazes

A few commenters pointed out that bureaucracies seem almost designed to obfuscate. Viliam:

I would also expect some combination of: putting in the minimum effort, playing it safe, and optionally moral-maze behavior, and some form of rent seeking (e.g. taking bribes).

Pure blame minimization would motivate bureaucracies to reduce their jurisdiction, but expanding the jurisdiction provides more opportunities for rent seeking… if there is a standard way to make decisions about many things and yet carry no responsibility for their failure, I would expect bureaucracy to optimize for this.

Something like: Someone else is responsible for the success, but at every step they need to ask the bureaucracy for a permission; if the project fails because they didn’t get the permission, the person responsible is fully blamed regardless, because they should have found another solution.

And davidestevens links to a series of essays on The Office, one of the major themes of which is that workplaces depend on somewhat ambiguous hierarchies, as if we are actually predictably better off without some information and clarity on certain things.

This suggests a potential theory of bureaucracy as a system which we intend not to be tightly incentivized to preserve a particular narrow structure, because we are sometimes better off not building clearly motivated systems.

Let me go back to the idea of degrees of freedom. Under normal circumstances, you’d think that economists prefer workers to be highly incentivized to produce value. But in some markets, it’s hard for customers to declare ahead of time what a valuable product is. One example is research: how do you know who to pay to do research, and how do you know what results you’re buying? Research, by definition, is finding out things we don’t already know. Why would someone ever buy an I-don’t-know-what?

There isn’t necessarily a great solution to this problem. But we can nevertheless be confident that some skilled researchers exist, and if we give them space and time and money to research, they will produce socially valuable things. So a second-best solution may be to create an entity called a tenured professor. This thing, like a bureaucrat, cannot be fired and has no extrinsic motivation to do much of anything. Yet, if they are intrinsically motivated, which we can possibly select for by forcing them to go through graduate school and produce lots of papers to get tenure, then they might produce very valuable research anyway, a predictable tendency in the absence of any structures outside of themselves that would seem to cause said tendency.

Maybe you don’t like tenure; I’m not saying it’s necessarily a great system or one without problematic tendencies. But I am saying there’s a sense in which we can perceive the potential value of such systems, of systems that don’t do what we want but may help us nevertheless.

Another example is the police. Theoretically, a police officer should behave so as to preserve the structure of the law, punishing any lawbreakers they observe. If a society has some bad laws on its books, however, we may not want to tightly incentivize police officers. For example, we may prefer that officers look the other way when someone is clearly hiding marijuana or drinking alcohol from a brown paper bag. Tightly incentivized police officers work well only when lawmakers are tightly incentivized to make and keep only good laws.

But you can’t tell the police not to enforce the law. There must be plausible deniability at every level, which requires a system whose motives cannot be determined even when closely examined.

Again, I’m not saying that this system doesn’t have obvious drawbacks. But as per the general theory of second best, when your system is imperfect in multiple ways, moving any one part of it toward “perfection” while leaving the other parts unchanged may make you substantially worse off. If we have a tendency to ask for bad things to be preserved, or if there are some good things that we don’t know how to ask for, then we might not want a system that does a good job of preserving our structure, even if this requires that the system be very confusing to interact with.

So perhaps bureaucracies are selected for by forces that are trying to regulate some set of variables in an obfuscatory manner, with some obvious benefits and drawbacks accordingly. Similarly, bureaucracies are incentivized to achieve a lack of internal clarity: anyone who does too good a job of giving clear directions gets fired, or at least transferred and sidelined, if firing someone really is too difficult.

Conclusion: What does an alien bureaucracy look like?

Bureaucracies are an important part of our society, but my interest in them is due to my interest in economic structure more generally. So when thinking about bureaucracy, I might encourage you to ask the question: what does an alien bureaucracy look like?

Here’s what I mean. If alien life does exist, although their tastes and preferences may be very different from ours, I nevertheless expect to be able to model them as rational utility-maximizers, for basically the same reasons that a physicist would expect to model them according to our physics. I would expect them to have an economy, and I would expect them to have something broadly analogous to profit-maximizing firms, in the sense that they would have created resource-management systems that maximize some mutually commensurable quantity, such that the systems can “talk” to each other and coordinate easily, allowing the alien society as a whole to trade off between various allocations of scarce resources in optimal ways.

And while it’s less obvious that aliens would use a voting system per se to make collective decisions, if they did have a politician-based democracy, I would expect their politicians to be incentivized to get votes and to be selected on the basis of their success at doing so.

So these economic systems, although traditionally understood in human terms, may be thought of more generally as ways that complex systems behave under certain conditions, allowing us to take the human part out and focus on the abstract general relationships that really define the systems in question. If we can do so, then we might have found a “natural” economic structure, which consequently may be relatively simple to design and apply in many contexts.

But it is extremely non-obvious to me that aliens would have bureaucracies. Bureaucracies, it seems to me, really are a function of human psychology, such as the hypocritical way that we support laws that we do not want to be enforced. So obfuscatory systems may in fact be selected for in human societies, and the systems in question may be incentivized to avoid having a clear incentive structure.

And while this doesn’t clear up the “Bwuh?!”, it does clear up the “Bwuh?!” of “Bwuh?!”—I am no longer confused about why it is that economics doesn’t have a clear, obvious, natural way of modeling the behavior of bureaucracy. And that itself seems like some kind of a step forward, for me at least.

So, if you’re not already sick of the subject, I’m curious to know what you think of that idea—or anything related that’s occurred to you on this subject.