I don’t think I see the problem. Chevron deference is, as you say, about whether courts defer to agencies’ interpretations of statutes. It comes up when an agency thinks one interpretation is best, and a court thinks a different interpretation is the best reading of the statute, but the agency’s preferred interpretation is still a plausible reading. In that case, under Chevron, the court defers to the agency’s interpretation. Do away with Chevron, and the court will follow what it thinks is the best reading of the statute. This is, I should note, the baseline of what courts usually do, and what they did before Chevron. Chevron is an anomaly.
In terms of implications, I think it is true that agencies tend to interpret their mandates broadly, so doing away with Chevron deference will, at the margin, reduce the scope of some agencies’ powers. But I don’t see how it could lead to the end of the administrative state as we know it. Agencies will still have jobs to do that are authorized by statute, and courts will still let agencies do those jobs.
So what does AI regulation look like? If it looks like Congress passing a new statute to either create a new agency or authorize an existing agency to regulate AI, then whether Chevron gets overturned seems irrelevant: Congress is quite capable of writing a statute that authorizes someone to regulate AI, with or without Chevron. If it looks like an existing agency correctly reading an existing statute to authorize it to regulate some aspect of AI, then again, that should work fine with or without Chevron. If, on the other hand, it looks like an existing agency over-reading an existing statute to claim authority it does not have to regulate AI, then (1) that seems horribly undemocratic, though if the fate of humanity is on the line then I guess that’s OK, and (2) maybe the agency does it anyway, it takes years to get fought out in court, and that buys us the time we need. But if the court ruling deters the agency from trying to regulate AI at all, or if the years-long court fight doesn’t buy enough time, we might actually have a problem here.

I think this argument needs to be fleshed out in more detail. Which particular agency do we think might over-read which particular statute to regulate AI? If we aren’t already targeting a particular agency with arguments about a particular statute, with a reasonable chance of getting it to regulate for AI safety rather than AI ethics, then worrying about the courts seems pointless.
I think you’re probably right. But even so, this will make it harder to establish an agency where the bureaucrats/technocrats have a lot of autonomy, and there’s at least a small chance of an extreme ruling that could make it extremely difficult.
Harder, yes; extremely difficult, I’m much less convinced. In any case, Chevron was already dealt a blow in 2022 (in West Virginia v. EPA), so those lobbying Congress to create an AI agency of some sort should be encouraged to explicitly give it a broad mandate (e.g., that it has the authority to settle major economic or political questions concerning AI).
It might also make it easier. You can use the fact that Chevron was overruled to justify writing broad powers into the new AI safety legislation.