Eliezer,
How are you going to be ‘sure’ that there is no landmine when you decide to step?
Are you going to have many ‘experts’ check your work before you trust it? Who are these experts, if you are occupying the highest intellectual orbital? How will you know they aren’t yes-men?
Even if you can predict the full effects of your code mathematically (something I find somewhat doubtful, given that you will be creating something more intelligent than we are, whose actions will by nature be unpredictable to man), how can you be certain that the hardware it runs on will perform with the integrity you need it to?
If you have something that is changing itself towards ‘improvement,’ then won’t the dynamic nature of the program leave it open to errors that might have fatal consequences? I’m thinking of a digital version of genetic mutation, in which your code is the DNA...
Let’s say the superintelligence invents some sort of “code shuffling” mechanism for itself, whereby it can generate many new useful functions in an expedited evolutionary manner (much as we generate antibodies), but in the process accidentally does something disastrous.
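To make the worry concrete, here is a purely hypothetical toy sketch (my own invention, not anything you have proposed) of what such “code shuffling” could look like: a system recombines trusted fragments into new candidate functions, keeps whichever scores best on a narrow fitness test, and never notices what the winner does on inputs the test never touched.

```python
import random

# Toy, hypothetical sketch of "code shuffling": recombine trusted fragments
# into new candidate functions, the way antibody genes are recombined.

# Primitive building blocks the system already trusts.
FRAGMENTS = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x - 3,
    lambda x: x // 2,
]

def shuffle_new_function(rng, length=3):
    """Compose a random pipeline of fragments -- the 'shuffling' step."""
    pieces = [rng.choice(FRAGMENTS) for _ in range(length)]
    def candidate(x):
        for piece in pieces:
            x = piece(x)
        return x
    return candidate

def fitness(fn):
    """Narrow fitness test: reward candidates that grow small inputs."""
    return sum(fn(x) for x in range(10))

rng = random.Random(0)
candidates = [shuffle_new_function(rng) for _ in range(100)]
best = max(candidates, key=fitness)

# The catch: selection only ever looked at x in range(10). The chosen
# function's behaviour on everything outside that range was never examined,
# so a 'useful' candidate can still do something unintended elsewhere --
# the digital analogue of a harmful mutation slipping past selection.
print(best(10**6))
```

The point of the toy is only that selection pressure is applied to whatever the test measures, not to everything we care about.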
The argument ‘it would be too intelligent and well-intentioned to do that’ doesn’t seem to cut it, because the machine will be evolving from something of below-human intelligence into something above it, and it is not certain which types of intelligence it will develop faster, or what trajectory this ‘general’ intelligence will take. If we knew that, we could program the intelligence directly and would not need to make it recursively self-improving.
Ohhhh… oh so many things I could substitute for the word ‘Zebra’....