Meta-Regulations

The United States has a lot of regulations. So does the EU.

No matter how you count them, each body has tens of thousands of rules to be followed, across all the domains it is responsible for.

This does not have to be a problem. The world is a complicated place, and the number of fields, industries, subjects, topics, and magisteria a government might rightfully want rules for is large.

That being said, there are problems with laws and regulations, ranging from unintended consequences and ratchet effects to slowing down innovation and preventing the construction of everything from houses and green energy to nuclear power.

The Problems With Regulations And Laws

The Ratchet

A big problem with regulations—and laws in general—is that they tend to function like a ratchet.

Once a law is passed, it’s a law, and the same goes for regulations. They don’t expire, they don’t fade, and they don’t die.

In other words, the number of laws and regulations tends to increase monotonically, including the old, the outdated, and the silly.

This creates two problems:

1. The pace of innovation and construction of new things slows down over time, as more and more rules and bureaucracy are implemented, and

2. The body of rules grows until every action breaks some rule, at which point the de facto law becomes the subset of laws which are enforced—a decision made by those who enforce the law, giving them arbitrary power over citizens. In other words, when everyone is guilty of something, anyone can be punished for anything.

The Tech Debt

In software development, there’s a concept called Tech Debt. When a company or developer writes software, they’re often under a deadline or dealing with current problems, and so the software they create is made hastily, to solve the immediate problem, and without sufficient tests.

The result is a pile of software that grows more complicated and difficult to work with over time, as more and more one-off solutions are implemented and decisions to prioritize current crises over future needs are made. The pace of development slows, the code becomes harder to work with or fix, and eventually the whole thing has to be thrown out because it finally breaks in a way that can’t be fixed.

Dealing with Tech Debt is practically a field of software engineering by itself, because it genuinely matters—the livelihood of many companies depends on their software continuing to function, both now and in the future.

Laws and regulations, on the other hand, have no such constraint.

There is no automated process by which old, outdated, or stupid laws are disposed of, and without that it falls to actual legislators to repeal laws.

Who is going to spend political capital repealing dumb laws, when that capital could be better spent fulfilling campaign promises or riding the current political hobbyhorse?

And so laws and regulations pile up, inevitably becoming a massive swamp that only lawyers and experts can navigate—thus making everything that has to do with laws and regulations (most things, at least in the physical world in the US) increasingly difficult and costly over time.

The Incentives

It would, in general, be a mistake to blame a specific legislator or regulator for these problems. Pointing fingers and assigning blame is a cherished human tradition, but I have yet to see it actually fix anything in recorded history.

No, the real problem is the incentives that legislators and regulators face. Think about it—what are these people rewarded for, and what are they punished for?

When was the last time a legislator was cheered for removing a stupid law, except when it happens to be a partisan issue?

Legislators are expected to make laws and are rewarded for doing so, not for repealing them. The closest they might get is voting against a new law, and even that usually winds up being a partisan issue.

Regulators have it worse. Legislators have undoubtedly repealed laws in the past; when has a regulation ever been completely removed?

The issue is that regulation tends to be about safety—construction is regulated so that buildings are safe, food manufacturing is regulated so people don’t get poisoned, and so on. Which means that a regulator is rewarded for (and perhaps more importantly, can always justify) adding more regulations.

Can’t be too careful, right? What if they don’t add the regulation, and someone gets hurt? What if someone dies?

On the other hand, if the regulator doesn’t add the regulation (or worse, removes an existing one), and someone gets hurt or killed—well, that sounds like a lawsuit, doesn’t it?

So regulators are incentivized only to add regulation, never to repeal or simplify it, no matter how much regulation currently exists or how stupid it is. Regulators are never rewarded for the pace of construction or innovation, only punished when things go wrong. Why shouldn’t they just keep adding regulations to everything?

Possible Meta-Regulations

In order to correct this problem, I suggest that we—you guessed it—add regulations!

Specifically, I think that there ought to be laws and regulations governing how laws and regulations work—meta-regulations, if you will. These meta-regulations would hopefully help to correct some of the above problems.

To be clear, I don’t know if these would work, and I don’t think all of them should necessarily be applied at the same time—but I think that each of them could be a serious boon to our government in general.

Within The Overton Window

I’ll start with the ideas I think are actually possible, as in, could feasibly happen.

Sunset Clauses

For those unaware, a sunset clause is an expiration date on a law. When the date comes up, the law is repealed by default unless the relevant legislative body votes to renew it.

Laws with sunset clauses have a long history; some are on the books today and others have already expired, making this the most realistic meta-regulation available (indeed, the Texas Legislature already has something like the following).

To be specific, I would advocate that:

Every law passed by Congress—and every regulation made by the federal government—must include a sunset clause of no more than 30 years.

This would be a radical change to how our government functions, but I believe it would both a) make sense philosophically and b) have a large, tangible benefit.

Part of the entire idea of a democracy or republic is that government is only legitimate when it comes with the consent of the governed—and yet no one consented to the governance or laws made before they were born!

As for the benefit, it creates an automatic process for removing old, outdated, and useless laws from the books, which would clean up a considerable amount of the Tech Debt by default. It also shifts the status quo from keeping all laws and regulations to only keeping the ones that are relevant to the current day, which should help.
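To make the mechanism concrete, here is a minimal sketch in Python. The names, the 30-year constant, and the 365-day approximation are mine, purely for illustration: the point is that lapsing is the default and renewal requires an affirmative act.

```python
from datetime import date, timedelta

MAX_SUNSET_YEARS = 30  # the proposed ceiling on any sunset clause


class Law:
    """Toy model of a statute that lapses by default unless renewed."""

    def __init__(self, name: str, enacted: date, sunset_years: int):
        if sunset_years > MAX_SUNSET_YEARS:
            raise ValueError("sunset clause exceeds the 30-year cap")
        self.name = name
        # Approximate the sunset date; a real statute would name an exact date.
        self.expires = enacted + timedelta(days=365 * sunset_years)

    def renew(self, years: int) -> None:
        # Renewal requires an affirmative vote; expiration is the default.
        self.expires += timedelta(days=365 * min(years, MAX_SUNSET_YEARS))


def laws_in_force(laws: list[Law], today: date) -> list[Law]:
    # Anything past its sunset date has lapsed and is off the books.
    return [law for law in laws if law.expires > today]
```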

Cost-Benefit Analysis

The US Congressional Budget Office (CBO) is required to estimate the costs of almost every bill that Congress is seriously considering passing.

What they are not required to do, on the other hand, is estimate the benefits of a given bill, whether in increased GDP, future tax income, deficit reduction, or any non-monetary effect.

Many of us are taught as children how to make lists of pros and cons as a tool to help us make important decisions—and yet the people making our laws are only required to look at the costs of a law, not at the full context of its effects.

I would advocate that:

For all laws and regulations, a full cost-benefit analysis must be done and made public.

While there are plenty of things in the world that can’t be quantified, our government should at least be responsible for considering what can be. (Note that I’m not advocating that every law or regulation must have a larger quantifiable benefit than cost—just that the analysis is done, and made public.)

I believe that this would force legislators and regulators to consider the full context of their decisions in a way that they currently don’t have to, which should hopefully improve their decision-making. Since half of this process already exists, I think it would be straightforward, if not easy, to implement the other half (at least for laws).

It’s also a matter of simple common sense.
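For concreteness, here is a minimal sketch of what “done and made public” could look like mechanically. The structure and field names are invented, and nothing here requires the net to be positive—only that the tally be published, unquantifiable effects included as notes.

```python
from dataclasses import dataclass, field


@dataclass
class ImpactEstimate:
    """One quantifiable effect of a proposed law, in dollars per year."""
    description: str
    annual_dollars: float  # positive = benefit, negative = cost


@dataclass
class CostBenefitAnalysis:
    bill_name: str
    estimates: list[ImpactEstimate] = field(default_factory=list)
    unquantified_notes: list[str] = field(default_factory=list)  # effects that resist numbers

    def net_annual_effect(self) -> float:
        return sum(e.annual_dollars for e in self.estimates)

    def publish(self) -> str:
        # The requirement is disclosure, not a positive net number.
        lines = [f"Cost-benefit analysis: {self.bill_name}"]
        lines += [f"  {e.description}: {e.annual_dollars:+,.0f}" for e in self.estimates]
        lines += [f"  (unquantified) {note}" for note in self.unquantified_notes]
        lines.append(f"  Net quantifiable effect: {self.net_annual_effect():+,.0f}")
        return "\n".join(lines)
```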

Outcomes, Not Methods

The free market functions at its best when given goals or criteria to meet, and left to figure the methods out itself. That is the heart of innovation: given a what, figure out the how.

A good regulation ought to work with the free market by specifying the what, and leaving the how to innovators. Building codes can function this way, specifying certain levels of earthquake resistance or wind forces a building must be able to endure, and leaving the how in the hands of builders.

Other regulations, however, specify the how, leaving no room for improvement or innovation. This leads to stagnation.

A primary example of the latter is the Nuclear Regulatory Commission’s As Low As Reasonably Achievable (ALARA) doctrine for radiation exposure, which functionally destroyed new nuclear power in the US for decades.

This is also an example of a regulation with an unclear or changing meaning, which is difficult to adapt to and easy to litigate against, again slowing down innovation over time and tying entire industries up in red tape.

I would advocate that:

A regulation or law must pertain to a desired outcome, not a specific method.

A good regulation is usually an attempt to mitigate or eliminate a negative externality of a free market—it should do that by setting a straightforward, clear outcome that the market must meet, and then leaving the how to the market.

Outside The Overton Window

The previous three meta-regulations all have precedent in existing laws and regulations.

The next three, so far as I am aware, do not.

Consider them more speculative: suggestions for counteracting the incentive problems outlined above.

A Regulation Cap

One of the issues highlighted above is that regulators are only ever incentivized to add new regulations, never to repeal them. Legislators face similar incentives (with the exception of partisan legislation), but for this suggestion the focus will be on regulators.

One way to change these incentives would be to mandate a maximum number of regulations a given agency or regulator is allowed to have in force at any given time: a cap on the total number of regulations.

Once that maximum is reached, regulators would face a different set of incentives than their current, misaligned ones: it would only be possible to add a new regulation by removing an existing one.

This would force regulators to genuinely weigh the costs and benefits of each existing regulation against those of the proposed one, keeping only the regulations that achieve the most effect for their cost.
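As a sketch of that dynamic (names and numbers invented, and assuming the per-regulation word cap discussed just below), the logic looks something like this: once the registry is full, a new rule can only enter by naming an existing one to repeal.

```python
from typing import Optional


class RegulationRegistry:
    """Toy model of an agency capped at a fixed number of regulations."""

    def __init__(self, max_regulations: int, max_words_each: int):
        self.max_regulations = max_regulations
        self.max_words_each = max_words_each  # guards against one giant omnibus rule
        self.active: dict[str, str] = {}  # name -> full text of the regulation

    def add(self, name: str, text: str, repeal: Optional[str] = None) -> None:
        if len(text.split()) > self.max_words_each:
            raise ValueError(f"{name} exceeds the per-regulation word cap")
        if len(self.active) >= self.max_regulations:
            # At the cap: a new rule is only possible by removing an existing one.
            if repeal is None or repeal not in self.active:
                raise ValueError("cap reached: name an existing regulation to repeal")
            del self.active[repeal]
        self.active[name] = text
```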

The Caveat: I freely admit that this cap would have to be very carefully executed, lest it be dodged or gamed by regulators. Having a maximum number of regulations might just make each regulation ridiculously long and complicated. A maximum word count for each regulation might solve that problem, but it would create other problems.

I would advocate that:

The total number of regulations a given agency is allowed to have in force must be no more than some specific number, and each regulation must be no more than some specific number of words or mandates.

By constraining regulators to a finite number of regulations, the most valuable regulations would hopefully be kept, while also minimizing the amount of pointless red tape.

Repeal Before Regulation

“Before passing a new law, however, one should always ask, ‘Can we accomplish the same end by repealing—or liberalizing—an existing law?’”

- Bryan Caplan, Repealing Political Discrimination

Given that the purpose of meta-regulations is to reduce the complexity and tech debt of laws and regulations, Bryan Caplan’s suggestion above is worth adding to the list.

I would advocate that:

If the purpose of a new law or regulation can be accomplished via the repeal or liberalization of an existing law or regulation, repeal or liberalization must be preferred, with a new law or regulation implemented only if that approach has been tried and has failed.

The Caveat: This might have far-reaching consequences that I can’t foresee—for instance, it might significantly slow down any lawmaking as everyone has to check the proposed law against every existing law. I’d be willing to risk those consequences, however, since the primary consequence would be a reduction in the number of laws and regulations.

Error Budgets

There is a fundamental tension in software development between software developers and software operators—that is, between those tasked with making the system better and those tasked with keeping the system running.

This tension exists because of the possibility of error—any change to the system, including ones designed to improve it, poses the risk of damaging the system or taking it offline.

The field of software engineering that manages this tension (DevOps) has a tool called the error budget, which is designed to maximize development and innovation while also keeping the system running as much as possible.

Definition And Motivation

An error budget is a fixed number of mistakes, agreed upon ahead of time, that a system is allowed per period of time. A given application may be allowed five mistakes per year, for instance.

Two states may then exist for the system, with the following consequences:

1. The system has not yet reached its error budget for the year, in which case development, changes, and innovation may proceed at full speed.

2. The system has reached its error budget for the year, in which case no changes will be permitted (except for emergencies) until the year is up and the error budget resets.

This balances the incentives of everyone involved: those making changes can go ahead and do so, aware that they have a limited number of mistakes in the budget; once those mistakes have been made, those who want to keep the system running are given priority in the decision-making process.

The key to understanding how error budgets incentivize innovation is that the budget is supposed to be used up each period.

In other words, if the budget wasn’t used up, then changes weren’t being made fast enough. The budget allows people to make mistakes while still encouraging them to innovate by precisely defining how many mistakes are tolerated during a given period of time.
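For concreteness, here is a minimal sketch of the two-state logic described above. It is a simplified toy, not how any particular team tracks its budget in practice:

```python
class ErrorBudget:
    """Tracks mistakes against an annual allowance agreed upon ahead of time."""

    def __init__(self, allowed_errors_per_year: int):
        self.allowed = allowed_errors_per_year
        self.spent = 0

    def record_error(self) -> None:
        self.spent += 1

    def changes_allowed(self) -> bool:
        # Under budget: ship changes at full speed.
        # At or over budget: freeze (emergencies aside) until the yearly reset.
        return self.spent < self.allowed

    def reset_for_new_year(self) -> None:
        self.spent = 0


budget = ErrorBudget(allowed_errors_per_year=5)
budget.record_error()
assert budget.changes_allowed()      # 1 of 5 spent: keep innovating
budget.spent = 5
assert not budget.changes_allowed()  # budget exhausted: stability takes priority
```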

Regulatory Failures In The Trade-off

Many regulatory agencies face a trade-off between innovation and safety, much like the one in software engineering. The difference is that, for many of these agencies, lives are on the line, which causes them to err on the side of safety so much that they actually get people killed.

The FDA, for instance, is so safety-conscious that it delays approving life-saving medicines and vaccines.

For each decision the FDA makes, there are two death tolls:

1. The people killed because the FDA approved an unsafe medication.

2. The people killed because the FDA didn’t approve (or delayed the approval of) a life-saving medication. (This also includes all the life-saving medications that could have existed, if the process to bring them to market was less slow and less costly.)

However, FDA regulators are only punished for (1), never for (2). This leads them to err too far on the side of not approving (or delaying) life-saving medication, which gets people killed.

The Error Budget

To correct this, as morbid as it may sound and as politically unpalatable as the idea is, I believe that an error budget (in this case, a number of deaths per year) is a good solution, perhaps the best one.

The fact of the matter is that people die, and the goal of a regulatory body serving the American people should not be to minimize only the deaths that can be laid directly at its feet, but to minimize the total deaths from causes (1) and (2).

Innovation must be incentivized and the total death toll must be minimized.

Thus, I propose:

Regulatory agencies governing safety, including but not limited to the FDA, EPA, NRC, and CDC, must adopt an error budget. The actual number of errors per year in each agency’s domain must be tracked, and while that number remains below the error budget, regulatory requirements must be loosened. Once the budget has been filled, regulatory requirements would be tightened and no further risks taken until the following year.

The Caveat: The implementation of an error budget would be very different for each agency, and the idea would need to be tailored appropriately to each situation, but I believe that this would help regulatory agencies adopt policies and regulations that both incentivize innovation and minimize the total death toll.

Conclusion

Laws and regulations are a part of life, and have the potential to do a great deal of good. The way that the current systems work, however, incentivizes lawmakers and regulators to make decisions that are not good for the long-term health of those systems.

The above meta-regulations would, I hope, change those systems and incentives such that the laws and regulations produced better serve the people they govern.