The state of the art in AGI, as I understand it, is that we aren’t competent designers: we aren’t able to say “if we build an AI according to blueprint X its degree of smarts will be Y, and its desires (including desires to rebuild itself according to blueprint X’) will be Z”.
In much the same way, we aren’t currently competent designers of information systems: we aren’t yet able to say “if we build a system according to blueprint X it will grant those who access it capabilities C1 through Cn and no others”. This is why we routinely hear of security breaches: we release such systems in spite of our well-established incompetence.
So, we are unable to competently reason about desires and about capabilities.
Further, on current computer architectures it is possible for a program to accidentally gain access to its underlying operating system, where some form of its own source code is stored as data.
Posit that instead of a dumb single-purpose application, the program in question is a very efficient cross-domain reasoner. Then we have precisely the sort of incompetence that would allow such an AI arbitrary self-improvement.
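The “source code as data” point above can be made concrete with a toy sketch (Python here purely for illustration; nothing in the comment specifies a language or mechanism). On a von Neumann machine, program text is just data that a running process can read, rewrite, and re-execute:

```python
# Toy illustration only: program text held as ordinary data, which the
# same process can inspect, modify, and run again.
program = "def greet():\n    return 'hello'\n"

namespace = {}
exec(program, namespace)                  # run the stored source
assert namespace["greet"]() == "hello"

# "Self-modification" in miniature: edit the stored source and re-execute.
improved = program.replace("'hello'", "'hello, world'")
exec(improved, namespace)
```

This is obviously not self-improvement in any interesting sense, but it shows there is no architectural barrier: the barrier, such as it is, lies in the access controls we write, which is exactly where the incompetence described above bites.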
Today, according to most estimates I have seen, we are probably at least a decade away from the problem, and maybe a lot more. Computing hardware looks unlikely to be cost-competitive with human brains for around that long. So, for the moment, most people are not too scared of incompetent designers: not because we currently know what we are doing (I would agree that we don’t), but because most of the action still looks some distance off in the future.
All the more reason to be working on the problem now, while there’s still time. I don’t think the AGI problem is hardware-bound at this point, but it should be worth working on either way.
Well, yes, of course. Creating our descendants is the most important thing in the world.