I expect delays from regulation not to substantially affect the time at which AI can cause an x-risk, whereas they do substantially affect when TAI is deployed broadly. I think it’s plausible that at the time AI x-risk happens, even in “slower” takeoffs, most of the economy will still not be automated, even if contemporary AI could in theory automate it.
Huh, this seems surprising to me. I feel like regulation to (for example) tax GPUs would have a pretty straightforward effect on prolonging timelines.
I meant specifically regulations preventing broad deployment of TAI and the replacement of jobs. Regulation slowing down the development of x-risk-level AI would indeed slow down x-risk, but I expect that by default this is much harder to make happen.
Agreed. Taxing or imposing limits on GPU production and usage is also the main route through which I imagine we might regulate AI.
What level of taxation do you think would delay timelines by even one year?
I’m not sure. It depends greatly on the rate of general algorithmic progress, which I think is unknown at this time. I think it is not implausible (>10% chance) that we will see draconian controls that limit GPU production and usage, decreasing effective compute available to the largest actors by more than 99% from the trajectory under laissez faire. Such controls would be unprecedented in human history, but justified on the merits, if AI is both transformative and highly dangerous.
It should be noted that, to the extent that more hardware allows for more algorithmic experimentation, such controls would also slow down algorithmic progress.
A GPU tax would not apply in countries that don’t implement it. It would suddenly give a large competitive advantage to companies able to design a training and inference accelerator, which is much simpler than a GPU, and sell it in untaxed countries. See China and Bitmain.
Bitmain is a chip designer that makes very-high-performance Bitcoin and Ethereum mining accelerators, and it has moved into AI. The tasks are similar.
With effective compute for AI doubling more than once per year, a global 100% surtax on GPUs and AI ASICs seems like it would make a difference of only months to AGI timelines.
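A rough back-of-the-envelope way to see why (the 9-month doubling time is an illustrative assumption, not a sourced figure):

```python
import math

# Illustrative assumption: effective compute for AI doubles faster
# than once per year, say every 9 months.
doubling_time_months = 9

# A 100% surtax halves the compute a fixed budget can buy.
compute_factor = 0.5

# A one-time halving is recouped after one doubling time, so the
# delay to reaching any fixed compute threshold is
# log2(1 / compute_factor) doubling times.
delay_months = math.log2(1 / compute_factor) * doubling_time_months
print(delay_months)  # 9.0 -> months, not years
```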
What is your source for the claim that effective compute for AI is doubling more than once per year? And do you mean effective compute in the largest training runs, or effective compute available in the world more generally?
Is “effective compute” the combination of hardware growth and algorithmic progress? If those factors are multiplicative rather than additive, slowing one of them may accomplish little on its own, but maybe it could pave the way for more significant changes when you slow both at the same time?
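To illustrate the multiplicative structure with made-up growth factors (none of these numbers are sourced):

```python
# Effective compute growth as the product of hardware progress and
# algorithmic progress, with purely illustrative annual factors.
hardware_growth = 1.4    # assumed yearly gain in FLOP per dollar
algorithm_growth = 1.7   # assumed yearly gain from better algorithms

baseline  = hardware_growth * algorithm_growth                  # ~2.4x/year
slow_one  = (0.8 * hardware_growth) * algorithm_growth          # slow hardware by 20%
slow_both = (0.8 * hardware_growth) * (0.8 * algorithm_growth)  # slow both by 20%

print(baseline, slow_one, slow_both)
```

Slowing both factors compounds: each cut multiplies into the overall growth rate, which is the sense in which combining measures could beat either one alone.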
Unfortunately, it seems hard to significantly slow algorithmic progress. I can think of changing publishing behaviors (and improving security) and pausing research on scary models (for instance via safety evals). Maybe things like handicapping talent pools via changes to immigration policy, or encouraging capabilities researchers to do other work. But that’s about it.
Still, combining different measures could be promising if the effects are multiplicative rather than additive.
Edit: Ah, but I guess your point is that even a 100% tax on compute wouldn’t really change the slope of the compute growth curve – it would only shift the curve rightward and delay things a little. So we don’t get a multiplicative effect, unfortunately. We’d need to find an intervention that changes the steepness of the curve.
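A small sketch of the level-shift vs. slope-change distinction, under illustrative numbers (a 9-month doubling time and ~10x more effective compute needed to reach some threshold; both are assumptions):

```python
import math

doubling_time = 9.0             # months per doubling, assumed
doublings_left = math.log2(10)  # ~3.3 doublings to the threshold, assumed

# Level shift: a one-time halving of affordable compute (e.g. a 100%
# tax) moves the whole curve right by exactly one doubling time,
# no matter how far away the threshold is.
level_shift_delay = 1 * doubling_time

# Slope change: stretching the doubling time by 50% delays every
# remaining doubling, so the total delay scales with the distance
# to the threshold.
slope_change_delay = doublings_left * (1.5 * doubling_time - doubling_time)

print(level_shift_delay, round(slope_change_delay, 1))
```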
If the explicit goal of the regulation is to delay AI capabilities, and taxation is the mechanism for implementing it, it seems like one could figure out a way to make the delay longer. Also, a few months still seems quite helpful and would count as “substantial” in my mind.