An upcoming US Supreme Court case may impede AI governance efforts

The US Supreme Court is poised to rule on, and potentially overturn, the principle of “Chevron deference” (in the case Loper Bright Enterprises v. Raimondo). Chevron deference is a key legal principle by which the entire federal bureaucracy functions; the case it derives from, Chevron v. Natural Resources Defense Council (1984), is perhaps the most cited case in American administrative law. In essence, the doctrine holds that when Congress establishes a federal agency and there is ambiguity in the statutes defining the scope of the agency’s powers and goals, courts will defer to the agency’s interpretation of that scope as long as it is reasonable. While the original ruling seems to have merely codified previously implicit norms about the legal authority of federal agencies, the practice has likely increased agencies’ power and authority, both because it lets them act without much congressional oversight and because they tend to interpret their powers and goals rather broadly. I am not a legal expert, but it seems to me that without something like Chevron deference, the federal bureaucracy could not function in its contemporary form: Congress would have to establish agencies with much more precisely specified goals and powers, which seems very difficult given the technocratic complexity of many regulations and politicians’ often limited understanding of those details.

Because Chevron deference has expanded the regulatory capacity of the state, many conservative judges oppose it, and the Supreme Court currently has a conservative majority, as reflected in its recent affirmative action and abortion decisions. A market on Manifold Markets is trading at 62% that the Court will overturn the doctrine, and while only two people have traded on it, it seems pretty plausible overall that the ruling will be overturned in some form.

While overturning Chevron deference would likely benefit many industries that I think are largely overregulated, it could be quite bad for AI governance. Assuming that AI systems are regulated by a federal agency (either a pre-existing one or a new one designed for AI, as several politicians have suggested), I expect that the bureaucrats and experts who staff that agency will need a fair amount of autonomy to do their job effectively. This is because the questions relevant to AI regulation (e.g., which evals systems are required to pass) are more technically complicated than those in most other regulatory domains, which are already too complicated for politicians to understand well. As a result, an ideal agency for regulating AI would probably have a fairly broad mandate and would specifically be empowered to decide such details based on the judgment of AI safety experts rather than politicians. I expect that such agencies could still exist in some form even if the Court overturns Chevron, but I am quite uncertain about this, and a particularly strong ruling could plausibly jeopardize the existence of autonomous federal agencies run largely by technocrats.

The outcome of the case is almost entirely out of the hands of the AI safety community, but it seems like something that AI policy people should be paying attention to. If Chevron deference is overturned, AI policy could become much more legally difficult and complex, which would in turn raise the value of legal expertise and experience for AI governance efforts.