Epistemic status: Probably a terrible idea, but fun to think about, so I’m writing my thoughts down as I go.
Here’s a whimsical simple AGI governance proposal: “Cull the GPUs.” I think of it as a baseline that other governance proposals should compare themselves to and beat.
The context in which we might need an AGI governance proposal:
Suppose the world gets to a point similar to, e.g., March 2027 in AI 2027. There are some pretty damn smart, pretty damn autonomous proto-AGIs that can basically fully automate coding, but they are still lacking in some other skills, so they can’t completely automate AI R&D yet, nor are they full AGI. But they are clearly very impressive, and moreover it’s generally thought that full AGI is not that far off; it’s plausibly just a matter of scaling up, building better training environments, and so forth.
Suppose further that enough powerful people are concerned about possibilities like AGI takeoff, superintelligence, loss of control, and/or concentration of power, that there’s significant political will to Do Something. Should we ban AGI? Should we pause? Should we xlr8 harder to Beat China? Should we sign some sort of international treaty? Should we have an international megaproject to build AGI safely? Many of these options are being seriously considered.
Enter the baseline option: Cull the GPUs.
The proposal is: The US and China (and possibly other participating nations) fly people to all the world’s known datacenters and chip production facilities. They surveil the entrances and exits to prevent chips from being smuggled out or in. They then destroy 90% of the existing chips, perhaps in a synchronized way: e.g., once teams are in place in all the datacenters, the US and China say “OK, this hour we will destroy 1% each. In three hours, if everything has gone according to plan and both sides seem to be complying, we’ll destroy another 1%. Etc.” Similarly, at the chip production facilities, a committee of representatives basically stands at the end of the production line and rolls a ten-sided die for each chip; chips that don’t roll a 1 are destroyed on the spot.
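As a sanity check on the numbers, here is a toy simulation of the production-line die roll (a minimal sketch; the function and its parameters are my own, only the ten-sided die and the keep-on-a-1 rule come from the proposal):

```python
import random

def cull_production_line(num_chips: int, die_sides: int = 10) -> int:
    """Toy model of the production-line cull: roll a ten-sided die for
    each chip; only chips that roll a 1 are spared, the rest are
    destroyed on the spot. Illustrative only, not part of the proposal."""
    survivors = 0
    for _ in range(num_chips):
        if random.randint(1, die_sides) == 1:
            survivors += 1  # rolled a 1: chip survives
    return survivors

# Out of a million chips, roughly 100,000 (10%) should survive.
print(cull_production_line(1_000_000))
```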
All participating countries agree that this regime will be enforced within their spheres of influence and allow inspectors/representatives from other countries to help enforce it. All participating countries agree to punish severely anyone who is caught trying to secretly violate the agreement. For example, if a country turns out to have a hidden datacenter somewhere, the datacenter gets hit by ballistic missiles and the country gets heavy sanctions and demands to allow inspectors to pore over other suspicious locations, which if refused will lead to more missile strikes.
Participating countries can openly exit the agreement at any time (or perhaps, after giving one-month notice or something like that?). They just can’t secretly violate it. Also presumably if they openly exit it, everyone else will too.
Note that after the initial GPU destruction in the datacenters, the inspectors/representatives can leave, and focus all their efforts on new chip production.
That’s it.
The idea is that this basically slows the speed of AI takeoff by 10x (because compute will be the bottleneck on AI R&D progress around this time). And a slower takeoff is good! It’s great for avoiding misalignment/loss of control, which is in everyone’s interest. It’s also great for avoiding massive concentration of power, which is in most people’s interest. And it’s good for avoiding huge upsets in the existing balance of power (e.g., governments being puppeted by corporations, or the US and Chinese militaries becoming obsolete), which most powerful actors should be generically in favor of, since they are currently powerful and therefore have more to lose in expectation from huge upsets.
Minor point:
I think the slowdown is less than 10x because the serial speed of AI researchers will also probably be a limiting factor in some cases. 10x more compute gets you 10x more experiments and 10x more parallel researchers, but doesn’t get you 10x faster AIs. Maybe I think you get an 8x slowdown (edit: as in, 8x slowdown in the rate of research progress around superhuman AI researcher level averaged over some period), but considerably less than this is plausible.
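One toy way to make this quantitative (the functional form and exponent here are illustrative assumptions of mine, not something from the comment): suppose the rate of research progress scales sublinearly with compute $C$ because serial speed doesn’t scale,

$$ r \propto C^{\alpha}, \qquad 0 < \alpha < 1. $$

Cutting compute by 10x then slows progress by a factor of $10^{\alpha}$: $\alpha = 0.9$ gives $10^{0.9} \approx 7.9$, roughly the 8x figure, while smaller $\alpha$ (stronger serial bottlenecks) gives a considerably smaller slowdown.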
The true slowdown in the world where this happens is probably greater, because it’d be taboo to race ahead in nations that went to such lengths to slow down.
In some cases, sure. Especially perhaps once you are in the vastly superintelligent regime.
An alternative idea is to put annual quotas on GPU production. The oil and dairy industries already do this to control prices and the fishing industry does it to avoid overfishing.
If the goal is to slow takeoff, then ideally you’d have some way to taper up the fraction destroyed over time (as capabilities advance and takeoff might have otherwise gone faster by default).
Separately, you could presumably make this proposal cheaper in exchange for being more complex by allowing for CPUs to be produced and limiting the number of GPUs produced rather than requiring GPUs to be destroyed. This only applies at the production side.
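For concreteness, here is a minimal sketch of what the taper-up schedule mentioned above could look like on the production side (every constant here is a placeholder assumption of mine, not part of the proposal):

```python
def destroyed_fraction(year: float, start_year: float = 2027.0,
                       initial: float = 0.90, ramp_per_year: float = 0.02,
                       cap: float = 0.99) -> float:
    """Hypothetical taper schedule: the fraction of newly produced GPUs
    destroyed starts at `initial` and rises by `ramp_per_year` each year,
    capped at `cap`. All constants are placeholders, not proposal terms."""
    years_elapsed = max(0.0, year - start_year)
    return min(cap, initial + ramp_per_year * years_elapsed)

# 90% destroyed in 2027, 94% in 2029, capped at 99% from mid-2031 on.
for y in (2027, 2029, 2032):
    print(y, destroyed_fraction(y))
```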
This is great.
Since you already anticipate the dangerous takeoff that is coming, and we are unsure whether we’ll notice and be able to act in time: why not cull now?
I get that part of the point is slowing down the takeoff and culling now does not get that effect.
But what if March 2027 is too late? What if getting proto-AGIs to do AI R&D only requires minor extra training or unhobbling?
I’d trust a plan that relies on massively slowing AI down now far more than one that relies on us still acting in time later.
Because no one will agree to do it.
I fail to see how that’s an argument. It doesn’t seem to me a reason not to cull now, only perhaps a reason not to advocate for it, and even that I would disagree with. Can you explain?
“Cull” is not in anyone’s action space. It’s a massive coordinated global policy. The only thing we can do is advocate for it. For pragmatic reasons, the OP specified that we wait until there’s popular will to do something potentially radical about AGI. Culling now would be nice but is not possible.
First of all, I am glad you wrote this. It is a useful exercise to consider comparisons between this and other proposals, as you say.
I think all of the alternatives you reference are better than this plan aside from xlr8ion and (depending on implementation) the pause.
The main advantage of the other solutions is that they establish lasting institutions, mechanisms for coordination, or plans of action that convert the massive amounts of geopolitical capital burned on these actions into plausible pathways to existential security, whereas the culling plan just places us back in 2024 or so.
It’s also worth noting that an AGI ban, treaty, and multilateral megaproject can each be seen as supersets of a GPU cull.
Wouldn’t this crash markets, given that people took on debt to fund chip production? Since private players can’t predict when governments might interfere, they wouldn’t want to fund AI after this, effectively making AI research a government project?
Why would any government that is not the US or China agree to this? They would be worse off if AI is only a government project, since their governments can’t hope to compete. If there are private players, they can get a stake in the private companies and gain some leverage.
If the preparations for this international agreement are not secret, I anticipate that this looks something like “Saudi Arabia announces that it will not be participating in the agreement, and all the companies that invested tens of billions of dollars each into GPUs move their server farms to SA before the agreement kicks in to avoid risking 90% of their extremely large capital investment”.
Of course the process of packing up all the GPUs, shipping them overseas, and setting up data centers and supporting infrastructure probably would delay AI progress by a month or two.
Also, it would be pretty rough to be Saudi Arabia harboring all that compute as a rogue state with the US and China against you. They could, for example, just demand that you join the treaty and destroy the compute, or else.
I am a bit confused about what a 10x slowdown means. I assumed you meant going from $e^{\lambda t}$ to $e^{0.1 \lambda t}$ on the R&D coefficient, but the definition from the comment by @ryan_greenblatt seems to imply going from $e^{\lambda t}$ to $0.1 e^{\lambda t}$ (which, according to AI 2027 predictions, would result in a 6-month delay).
The definition I’m talking about: “8x slowdown in the rate of research progress around superhuman AI researcher level averaged over some period.”
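To spell out the two readings (my own arithmetic, writing cumulative progress as $P(t) = e^{\lambda t}$):

$$ \text{10x rate slowdown:} \quad P(t) = e^{0.1 \lambda t} $$
$$ \text{10x level reduction:} \quad P(t) = 0.1\, e^{\lambda t} = e^{\lambda \left(t - \ln(10)/\lambda\right)} $$

The second is just the original curve delayed by a constant $\ln(10)/\lambda$, which is presumably where a fixed figure like six months comes from when $\lambda$ is large; under the first, every subsequent doubling of progress takes 10x as long.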
Would sending or transferring the ownership of the GPUs to an AI safety organization instead of destroying them be a significantly better option?
PRO:
- The AI safety organizations would have much more computing power
CON:
- The GPUs would still be there and at risk of being acquired by rogue AIs or human organizations
- The delay in moving the GPUs may make them arrive too late to be of use
- Transferring ownership has the problem that it can easily be transferred back (nationalization, forced transfer, or being sold back)
- This solution requires verifying that the AI safety organizations are not advancing capabilities (intentionally or not)