Can efficiency-adjustable reporting thresholds close a loophole in Biden’s executive order on AI?
Epistemic Status: Exploratory
President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence specifies compute thresholds for training runs and computing clusters that, if exceeded, impose reporting requirements. If a training run exceeds 10^26 floating point operations, or 10^23 floating point operations for a model trained mainly on biological sequence data, the company has to report information about physical and cybersecurity measures and red-teaming results. Additionally, the “acquisition, development, or possession” of a computing cluster with a theoretical maximum capacity above 10^20 floating point operations per second must be reported, along with the cluster’s location and computing power. My understanding of the executive order is that these thresholds are a stopgap until Cabinet members have defined “the set of technical conditions for models and computing clusters that would be subject to the reporting requirements.”
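To make those triggers concrete, here is a minimal sketch of the reporting rules as a decision procedure. The function names, inputs, and structure are my own illustration, not anything specified in the order; only the threshold values come from it.

```python
# Minimal sketch of the order's reporting triggers as a decision rule.
# Function and variable names are illustrative, not taken from the order.

GENERAL_THRESHOLD_FLOP = 1e26    # training-compute threshold for most models
BIO_THRESHOLD_FLOP = 1e23        # threshold for models trained mainly on biological sequence data
CLUSTER_THRESHOLD_FLOPS = 1e20   # cluster capacity threshold, operations per second

def must_report_training_run(training_flop: float, mostly_bio_data: bool) -> bool:
    """Return True if a training run exceeds the relevant compute threshold."""
    threshold = BIO_THRESHOLD_FLOP if mostly_bio_data else GENERAL_THRESHOLD_FLOP
    return training_flop > threshold

def must_report_cluster(peak_flops_per_second: float) -> bool:
    """Return True if a computing cluster exceeds the capacity threshold."""
    return peak_flops_per_second > CLUSTER_THRESHOLD_FLOPS
```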
While the executive order doesn’t explain the reasoning behind the specific operation counts used as thresholds (see here for a reference to one possible source), the intent is to mark how much compute may be enough to produce potential dual-use foundation models that the Biden administration considers high-risk. But the thresholds are static, which raises the question of whether the reporting mechanism is resilient to the fast pace of algorithmic efficiency gains. For example, the AI trends research organization Epoch AI found that, for language models, the compute needed to reach a fixed performance level halves roughly every 8 months. So there seems to be a loophole in the executive order: AI companies could leverage better algorithms to train models using less compute than the thresholds while achieving capabilities that would conventionally require exceeding them, thereby circumventing the reporting requirements.
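To see how quickly that halving rate erodes a fixed threshold, here is some back-of-the-envelope arithmetic. It assumes the ~8-month halving time applies uniformly, which is a simplification of Epoch AI’s aggregate estimate, and the specific run size is invented for illustration.

```python
# Illustrative arithmetic only: assumes an ~8-month halving time for the
# compute needed to reach a fixed capability level.

HALVING_MONTHS = 8.0

def efficiency_multiplier(months_since_baseline: float) -> float:
    """How many times less compute is needed to match a fixed capability level."""
    return 2 ** (months_since_baseline / HALVING_MONTHS)

# Example: a 3e25 FLOP run stays under the 1e26 threshold, but after 16 months
# of algorithmic progress it buys roughly the capability of a 1.2e26 FLOP
# baseline-era run, i.e. above the threshold.
actual_flop = 3e25
equivalent_baseline_flop = actual_flop * efficiency_multiplier(16)
print(f"{equivalent_baseline_flop:.2e}")  # ~1.20e+26
```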
If there’s a loophole that needs closing, maybe reporting thresholds should be efficiency-adjustable based on the number of floating point operations needed to reach certain capability levels. Here’s how that might work (a rough code sketch follows the list):
1. Establish baseline conventional algorithms and hardware configurations.
2. Estimate the floating point operations needed to achieve specific capability levels using these baselines.
3. Set floating point operation thresholds at these capability levels.
4. Companies using more efficient methods would have their training compute “adjusted” to what it would have been using baseline methods. If this adjusted compute exceeds the thresholds, they would face reporting requirements.
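Here is a rough sketch of what steps 1 through 4 might look like in practice, assuming regulators publish a baseline table mapping capability levels to the compute needed to reach them with conventional methods. The capability labels and table values are invented for illustration.

```python
# Rough sketch of steps 1-4. Assumes a published baseline table mapping
# capability levels to the FLOP needed to reach them with conventional
# (baseline) algorithms and hardware. Labels and values are hypothetical.

BASELINE_FLOP_FOR_CAPABILITY = {
    "level_1": 1e24,
    "level_2": 1e25,
    "level_3": 1e26,   # suppose the reporting threshold sits at this level
}

REPORTING_THRESHOLD_FLOP = 1e26

def adjusted_compute(capability_level: str) -> float:
    """Step 4: the compute a run 'would have' used under baseline methods."""
    return BASELINE_FLOP_FOR_CAPABILITY[capability_level]

def must_report(actual_training_flop: float, capability_level: str) -> bool:
    """Report if either the raw compute or the efficiency-adjusted compute
    reaches the threshold (>= so that matching the threshold-level capability counts)."""
    return (actual_training_flop >= REPORTING_THRESHOLD_FLOP
            or adjusted_compute(capability_level) >= REPORTING_THRESHOLD_FLOP)
```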
For instance, if the baseline estimate is 10^26 floating point operations to train a language model to a given capability level, and regulators determine that capabilities beyond this level pose higher risks, then 10^26 could become the reporting threshold for language models. Companies using baseline methods exceeding 10^26 operations, or companies with superior methods achieving equivalent capabilities with less compute, would both need to report.
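In numbers, only the 10^26 threshold comes from the executive order; the baseline estimate and the more efficient lab’s compute figure below are hypothetical.

```python
# Numeric version of the example above (all figures hypothetical except the
# 1e26 threshold taken from the executive order).
threshold_flop = 1e26
baseline_flop_for_capability = 1e26   # baseline estimate for this capability level
actual_flop = 4e25                    # a more efficient lab reaches the same capability with less compute

adjusted_flop = baseline_flop_for_capability   # what the run "would have cost" with baseline methods
print(actual_flop > threshold_flop)            # False: raw compute alone misses it
print(adjusted_flop >= threshold_flop)         # True: adjusted compute triggers reporting
```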
A big challenge in this approach would be predicting compute requirements for various capability levels and adjusting those estimates as algorithms improve. Establishing agreed-upon baselines and methodologies might also prove difficult, and the baselines would need to be revised as advanced methods become standard practice.
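One way the baseline estimates might be produced is to fit a simple compute-to-benchmark-score curve from baseline-method training runs and invert it to estimate the compute needed for a target capability level. The data points and the log-linear form below are invented for illustration, not a proposal for an actual methodology.

```python
import numpy as np

# Hedged sketch: fit a log-linear relationship between training compute and a
# benchmark score from baseline-method runs, then invert it to estimate the
# FLOP needed for a target score. All data points are invented.

compute = np.array([1e22, 1e23, 1e24, 1e25])   # training FLOP of baseline runs
score = np.array([40.0, 52.0, 63.0, 75.0])     # benchmark score of each run

# Fit score ≈ a + b * log10(compute), a crude stand-in for a real scaling law.
b, a = np.polyfit(np.log10(compute), score, 1)

def baseline_flop_needed(target_score: float) -> float:
    """Invert the fit: estimated FLOP to reach target_score with baseline methods."""
    return 10 ** ((target_score - a) / b)

print(f"{baseline_flop_needed(85.0):.2e}")  # rough extrapolation beyond the data
```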
In summary, the static compute thresholds in President Biden’s executive order are meant to enable oversight of potentially high-risk AI development. But the fast pace of algorithmic progress means compute-based reporting might not capture all high-risk scenarios. I wonder if a dynamic approach—specifically, tying thresholds to capability levels and adjusting for algorithmic efficiency—might provide a more resilient reporting mechanism as algorithms progress.