My impression of Eliezer’s model of the intelligence explosion is that he believes b) is much harder than it looks. If you make developing strong AI illegal then the only people who end up developing it will be criminals, which is arguably worse, and it only takes one successful criminal organization developing strong AI to cause an unfriendly intelligence explosion. The general problem is that a) requires that one organization do one thing (namely, solving friendly AI) but b) requires that literally all organizations abstain from doing one thing (namely, building unfriendly AI).
CFCs and global warming don’t seem analogous to me. A better analogy to me is nuclear disarmament: it only takes one nuke to cause bad things to happen, and governments have a strong incentive to hold onto their nukes for military applications.
What would a law against developing strong AI look like?
I’ve suggested in the past that it would look something like a ban on chips more powerful than X teraflops/$.
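A cap like that could in principle be checked mechanically. A minimal sketch, assuming a hypothetical threshold; the original leaves X unspecified, so every number below is made up for illustration:

```python
# Hypothetical compliance check for a performance-per-dollar cap.
# The cap value and the chip figures are invented, not real proposals.

def exceeds_cap(teraflops: float, price_usd: float,
                cap_tflops_per_dollar: float) -> bool:
    """Return True if the chip's teraflops-per-dollar exceeds the cap."""
    return (teraflops / price_usd) > cap_tflops_per_dollar

# An invented 100 TFLOPS chip sold for $1,000, against a made-up
# cap of 0.05 TFLOPS/$: 100 / 1000 = 0.1 > 0.05, so it would be banned.
print(exceeds_cap(100.0, 1000.0, 0.05))
```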
How close are we to illicit chip manufacturing? On second thought, it might be easier to steal the chips.
Cutting-edge chip manufacturing of the necessary sort? I believe we are light-years away, that things like 3D printing are irrelevant, and that it’s a little like asking how close we are to people running Manhattan Projects in their garage*; see my essay for details.
* Literally. The estimated budget for an upcoming Taiwanese chip fab is equal to some inflation-adjusted estimates of the Manhattan Project.
My notion of nanotech may have some fantasy elements—I think of nanotech as ultimately being able to put every atom where you want it, so long as the desired location is compatible with the atoms that are already there.
I realize that chip fabs keep getting more expensive, but is there any reason to think this can’t reverse?
It’s hard to say what nanotech will ultimately pan out to be.
But in the absence of nanoassemblers, it’d be a very bad idea to bet against Moore’s second law.
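Moore’s second law (also called Rock’s law) says the cost of a leading-edge fab doubles roughly every four years. A quick sketch of what that compounding implies, using an illustrative starting cost:

```python
# Rough projection under Moore's second law (Rock's law): fab cost
# doubles about every four years. The $10B starting figure is illustrative.

def fab_cost(base_cost_billion: float, years: float,
             doubling_period_years: float = 4.0) -> float:
    """Projected fab cost after `years` of exponential growth."""
    return base_cost_billion * 2 ** (years / doubling_period_years)

# After 8 years (two doublings), an illustrative $10B fab becomes $40B.
print(fab_cost(10.0, 8.0))
```

The point being: if fab costs keep compounding like this, the gap between what a state-backed consortium can afford and what a rogue group can afford only widens.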
Right, I see your point. But it depends on how close you think we are to AGI. Assuming we are still quite far away, then if you manage to ban AI research early enough, it seems unlikely that a rogue group, cut off from the broader scientific and engineering community, will manage to make all the remaining progress by itself.
The difference is that AI is relatively easy to do in secret. CFCs and nukes are much harder to hide.
Also, only AGI research is dangerous (or, more exactly, self-improving AI), while the other kinds are very useful. Since it’s hard to tell how far off the danger is (and many don’t believe there’s a big danger), you’ll get a reaction similar to the one emission-control proposals get: some will refuse to stop, and it’s hard to convince a democratic country’s population to start a war over that; not to mention that a war risks making the AI danger moot by killing us all.
I agree that all kinds of AI research that are even close to AGI will have to be banned or strictly regulated, and that convincing all nations to ensure this is a hugely complicated political problem. (I don’t think it is more difficult than controlling carbon emissions, because of status quo bias: it is easier to convince someone not to do something new that sounds good than to get them to stop doing something they view as good. But it is still hugely difficult, no question about that.) It just seems to me even more difficult (and risky) to aim to flawlessly solve all the problems of FAI.
Note that the problem is not convincing countries not to do AI, the problem is convincing countries to police their population to prevent them from doing AI.
It’s much harder to hide a factory or a nuclear laboratory than a bunch of geeks in a basement full of computers. Note how bio-weapons are really scary not (just) because countries might be (or are) developing them, but because it’s fast becoming easy enough for someone to do it in their kitchen.