I’m curious if you have any thoughts on the effect regulations will have on AI timelines. To have a transformative effect, AI would likely need to automate many forms of management, which involves making a large variety of decisions without the approval of other humans. The obvious effect of deploying these technologies will therefore be to radically upend our society and way of life, taking control away from humans and putting it in the hands of almost alien decision-makers. Will bureaucrats, politicians, voters, and ethics committees simply stand idly by while the tech industry takes over our civilization like this?
On the one hand, it is true that cars, airplanes, electricity, and computers were all introduced with relatively few regulations. These technologies went on to change our lives greatly in the last century and a half. On the other hand, nuclear power, human cloning, genetic engineering of humans, and military weapons each have a comparable potential to change our lives, and yet are subject to tight regulations, both formally, as the result of government-enforced laws, and informally, as engineers regularly refuse to work on these technologies indiscriminately, fearing backlash from the public.
One objection is that it is too difficult to slow down AI progress. I don’t buy this argument.
A central assumption of the Bio Anchors model, and all hardware-based models of AI progress more generally, is that getting access to large amounts of computation is a key constraint to AI development. Semiconductor fabrication plants are easily controllable by national governments and require multi-billion dollar upfront investments, which can hardly evade the oversight of a dedicated international task force.
We saw in 2020 that, if threats are big enough, governments have no problem taking unprecedented action, quickly enacting sweeping regulations of our social and business life. If anything, a global limit on manufacturing a particular technology enjoys even more precedent than, for example, locking down over half of the world’s population under some sort of stay-at-home order.
Another argument states that the incentives to make fast AI progress are simply too strong: first-mover advantages dictate that anyone who creates AGI will take over the world. Therefore, we should expect investments to accelerate dramatically, not slow down, as we approach AGI. This argument has some merit, and I find it relatively plausible. At the same time, it relies on a very pessimistic view of international coordination that I find questionable. A similar first-mover advantage was also observed for nuclear weapons, prompting Bertrand Russell to go so far as to say that only a world government could possibly deter nations from developing and using nuclear weapons. Yet I do not think this prediction was borne out.
Finally, it is possible that the timeline you state here is conditioned on no coordinated slowdowns. I sometimes see people making this assumption explicit, and in your report you state that you did not attempt to model “the possibility of exogenous events halting the normal progress of AI research”. At the same time, if regulation ends up mattering a lot—say, it delays progress by 20 years—then all the conditional timelines will look pretty bad in hindsight, as they will have ended up omitting one of the biggest, most determinative factors of all. (Of course, it’s not misleading if you just state upfront that it’s a conditional prediction).
To take the pessimistic side on AI, I see some reasons why AI probably won’t be regulated in a way that matters:
I suspect the "no fire alarm" hypothesis is roughly correct: by and large, people won't react until it's too late. My biggest reason comes from the AI effect, in which people downplay the intelligence of each new system as soon as it arrives. This is dangerous because it means people don't react to warning shots like GPT-3 or AlphaFold 2, and it updates me toward thinking that people won't seriously start calling for regulation until AGI is actually here, which is far too late. We got a fire alarm for nukes at Hiroshima, a lucky alarm that sounded before many nukes or nuclear power plants were built, and we can't rely on luck saving us again.
Politicization. The COVID-19 response worries me much more than it worries you, and its positives outweighed its negatives only because there wasn't any x-risk involved. In particular, the strong initial response actually decayed pretty fast, and in our world virtually everything is politicized into a culture war as soon as it actually impacts people's lives. A lot of the competence in handling, say, nukes or genetic engineering came from the fact that politics didn't use to eat everything, so no one had much motivation to defect. If we had to deal with nukes or genetic engineering under today's politics, at least 40% of the US population would support getting these technologies solely to destroy the other side.
Speaking of "far too late": most technologies that got successfully regulated either had everyone panicking (like nuclear reactor radiation) or weren't yet very developed (like human genetic engineering and cloning).
Finally, no one can safely have it, and AGI itself is a threat thanks to inner-optimizer concerns. So the solution of having governments control it is unworkable, since governments themselves have large incentives to pursue AGI, à la nukes, and little reason not to.
Note that I’m simply pointing out that people will probably try to regulate AI, and that this could delay AI timelines. I’m not proposing that we should be optimistic about regulation. Indeed, I’m quite pessimistic about heavy-handed government regulation of AI, but for reasons I’m not going to go into here.
Separately, the reason the COVID-19 response decayed quickly likely had little to do with politicization, given that the pandemic response decayed in every nation in the world except China. My guess is that, historically, regulations on manufacturing particular technologies have not decayed so quickly.