In addition to the comments by Robin and Aron, I would also point out that the longer the FOOM takes, the larger the chance it is not local, regardless of security: somewhere else, there might be another FOOMing AI.
Now, as I understand it, some consider this situation even more dangerous, but it might just as well create a “take over” defence.
Another comment on the FOOM scenario, this one as a sort of addition to Tim’s post:
“As machines get smarter, they will gradually become able to improve more and more of themselves. Yes, eventually machines will be able to cut humans out of the loop—but before that there will have been much automated improvement of machines by machines—and after that there may still be human code reviews.”
Eliezer seems to spend a lot of time explaining what happens when “k > 1”, i.e. when AI intelligence surpasses human intelligence and starts self-improving. But I suspect that the phase 0.3 < k < 1 might be pretty long, maybe decades.
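To make the k intuition concrete, here is a minimal sketch, assuming the geometric-series reading of k (each unit of optimization power invested in self-improvement returns k further units; the function name and numbers are mine and purely illustrative): for any k < 1 the gains compound but converge, so a 0.3 < k < 1 phase means real, ongoing improvement without a runaway.

```python
# Toy model of recursive self-improvement under the geometric-series
# reading of k: each round of self-improvement returns k times the
# gain of the previous round. Illustrative only.

def total_gain(k, rounds=1000, start=1.0):
    """Cumulative capability after repeatedly reinvesting improvements."""
    capability, increment = start, start
    for _ in range(rounds):
        increment *= k          # each round yields k times the last gain
        capability += increment
        if increment < 1e-12:   # gains have effectively converged
            break
    return capability

for k in (0.3, 0.7, 0.95):
    print(f"k = {k}: capability plateaus near {total_gain(k):.2f}x")
```

The qualitative break is at k = 1: just below it the series still plateaus (at roughly 1/(1 − k) times the starting capability), while above it each round returns more than it consumed and nothing plateaus, which is the difference between a long subcritical crawl and a FOOM.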
Moreover, by the time of a FOOM, we should be able to use vast numbers of fast ‘subcritical’ AIs (plus weak AIs) as guardians of the process. In fact, by that time, k < 1 AIs might play a pretty important role in the world economy and in security, and it does not take too much pattern-recognition power to keep things at bay. (Well, in fact, I believe Eliezer proposes something similar in his thesis, except for the locality issue.)