We can’t just “decide not to build AGI” because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world. The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world. Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit—it does not lift it, unless computer hardware and computer software progress are both brought to complete severe halts across the whole Earth. The current state of this cooperation to have every big actor refrain from doing the stupid thing, is that at present some large actors with a lot of researchers and computing power are led by people who vocally disdain all talk of AGI safety (eg Facebook AI Research). Note that needing to solve AGI alignment only within a time limit, but with unlimited safe retries for rapid experimentation on the full-powered system; or only on the first critical try, but with an unlimited time bound; would both be terrifically humanity-threatening challenges by historical standards individually.
Note in particular this part:
unless computer hardware and computer software progress are both brought to complete severe halts across the whole Earth.
Research on computer hardware takes lots and lots of money and big buildings, right? I.e., it’s not the type of thing someone can do in their basement? If so, it seems like, at least in theory, it could be regulated by governments, assuming they wanted to make a real effort at it. Is that true? If so, it seems like a point worth establishing.
(From there, the question of course becomes whether we can convince governments to do so. If that is impossible then I guess it doesn’t matter if it’s possible for them to regulate. Still, I feel like it is helpful to think about the two questions separately.)
I think there are a bunch of political problems with regulating computer hardware progress hard enough to make it totally cease. Think about how crucial computers are to the modern world: a lot of people will be upset if we stop building them, or stop making better ones. And if one country stops, that just creates an incentive for other countries to step in and dominate the industry. Even aside from that, I don’t think any regulator in the US, at least, has enough authority and internal competence to pull this off. More likely, it becomes a politicized issue. (Compare the much more straightforward and much more empirically grounded case of a carbon tax for climate change: a simple idea that would help a lot, and that is far less costly to the world than halting hardware progress. Instead of being universally adopted, it’s a political issue that different factions support or oppose.)
But even if we could, this doesn’t solve the problem in the long term. You would also need to halt software progress; otherwise we’ll keep tinkering with AI designs until we reach ones that run efficiently on 2020s-era computers (or 1990s-era computers, for that matter).
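To put rough numbers on that software-progress point, here is a back-of-the-envelope sketch (every quantity in it is an illustrative assumption, not a measurement): if algorithmic efficiency doubles every fixed number of months, the compute needed for a fixed capability halves each period, so a frozen hardware stock eventually suffices. The ~16-month default is loosely inspired by OpenAI’s “AI and Efficiency” estimate for ImageNet training, but it is treated here as a free parameter.

```python
import math

def years_until_hardware_suffices(compute_gap, doubling_months=16):
    """Years of software progress needed to close a given compute gap.

    compute_gap: assumed ratio between the compute a capability needs
        today and what the frozen hardware stock can supply.
    doubling_months: assumed algorithmic-efficiency doubling time
        (a hypothetical parameter; ~16 months is one published estimate
        for a narrow domain, not a law).
    """
    doublings_needed = math.log2(compute_gap)
    return doublings_needed * doubling_months / 12

# E.g. if the capability needs 1000x more compute than the frozen
# hardware stock can supply, software progress alone closes the gap in:
print(f"{years_until_hardware_suffices(1000):.1f} years")  # ~13.3
```

The point of the sketch is only that, under any steady efficiency-improvement assumption, halting hardware progress delays the threshold crossing rather than preventing it.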
So in the long run, the only thing in this class that would straight-up prevent AGI from being developed is a global, strictly enforced ban on computers. Which seems... not even remotely on the table, on the basis of arguments as theoretical as those for AI risk.
There might be some plans in this class that help, by delaying the date of AGI. But that just buys time for some other solution to do the real legwork.
The question here is whether they are capable of regulating it, assuming they are convinced and want to regulate it. It’s possible that it is so incredibly unlikely that they can be convinced that it isn’t worth discussing whether they’re capable of it. I don’t suspect that’s the case, but I wouldn’t be surprised to be wrong.
From there, the question of course becomes whether we can convince governments to do so. If that is impossible then I guess it doesn’t matter
Unfortunately, we cannot in fact convince governments to shut down AWS & crew. There are intermediate positions I think are worthwhile, but ending all AI research is outside the Overton window for now.