A prerequisite for an AI FOOMing is the ability to apply its intelligence to improving its source code so that the resulting program is more intelligent still.
We have an existence proof, namely ourselves, that human-level intelligence does not automatically give a mind the ability to understand source code and to make changes to that code which reliably have the intended effect: most humans cannot program at all, and even experienced programmers routinely introduce bugs when modifying programs they understand well. Perhaps some higher level of intelligence automatically grants that ability, but proving that would be non-trivial.
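To make that concrete, here is a toy illustration (my own hypothetical example, not anything from the FOOM literature) of why “reliably have the intended effect” is a high bar: a change that looks like a pure speed-up but silently alters behavior. The function names and the global are invented for the sketch:

```python
import functools

THRESHOLD = 10  # mutable global the function quietly depends on

def classify(x):
    """Original: recomputed on every call, so always consistent with THRESHOLD."""
    return "high" if x > THRESHOLD else "low"

# A plausible-looking "improvement": memoize the apparently pure function.
classify_fast = functools.lru_cache(maxsize=None)(classify)

print(classify_fast(12))  # 'high'
THRESHOLD = 100           # the environment changes...
print(classify_fast(12))  # still 'high': a stale cached answer, i.e. a new bug
print(classify(12))       # 'low': the original, uncached behavior
```

The rewrite passes any test written before THRESHOLD changes; the bug only surfaces later. Self-modification at scale multiplies the opportunities for exactly this kind of silent semantic drift.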
If your unpacking of “sufficiently smart” means that any sufficiently smart AI can not only think at a human level but also reliably and safely modify its own source code in ways that improve its intelligence, then a FOOM does appear inevitable. And we already have, via the AI Box experiments, an existence proof that human-level intelligence is sufficient for an AI to manipulate humans into giving it unrestricted access to computing resources.
But that meaning of “sufficiently smart” begs the question: it builds into the definition precisely the thing at issue, namely what it would take for an AI to have these abilities.
One of the insights developed by Eliezer is the notion of a “codic cortex”: a sensory modality that would let an AI make reliable inferences about source code in much the same way that humans make reliable inferences about the properties of visible objects, sounds, and so on.
I am prepared to accept that an AI equipped with a “codic cortex” would inevitably go FOOM, but (going on what I’ve read so far) that notion is at present more of a metaphor than a fully-developed plan.
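Since the notion is still metaphorical, any code can only gesture at it, but here is a minimal sketch of what one perceptual primitive of such a modality might look like, under my own reading of the idea as “perceiving structural facts about source code directly, rather than parsing it as text.” Everything here (the perceive function and the particular facts it extracts) is my invention for illustration, built on Python's standard ast module:

```python
import ast

def perceive(source: str) -> dict:
    """A toy 'percept' over source code: a handful of structural facts
    read off the abstract syntax tree rather than the raw text."""
    tree = ast.parse(source)
    facts = {"functions": [], "global_writes": [], "calls": set()}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            facts["functions"].append(node.name)       # what is defined here?
        elif isinstance(node, ast.Global):
            facts["global_writes"].extend(node.names)  # what state does it touch?
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            facts["calls"].add(node.func.id)           # what does it depend on?
    return facts

snippet = """
def classify(x):
    global THRESHOLD
    THRESHOLD = x
    return helper(x)
"""
print(perceive(snippet))
# {'functions': ['classify'], 'global_writes': ['THRESHOLD'], 'calls': {'helper'}}
```

A real codic cortex would presumably need to perceive semantics, not just syntax, which is where the hard part lives; but even this toy version would have flagged the hidden THRESHOLD dependency in the earlier example.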
Are you a programmer yourself?