The problem with the specific scenario given, with experimental modification/duplication rather than careful proof-based modification, is that it is liable to have the same problem we have with creating systems this way: the copies might not do what the agent that created them wants.
This could lead to a splintering of the AI and in-fighting over computational resources.
It also makes the standard assumption that AI will be implemented on, and stable on, a von Neumann-style computing architecture.
Of course, if it’s not, it could port itself to such an architecture if doing so is advantageous.
Would you agree that one possible route to uFAI is human-inspired?
Human-inspired systems might have the same or a similarly high fallibility rate as humans (from emulating neurons, or just from random experimentation at some level), and giving such a system access to its own machine code and low-level memory would not be a good idea. Most changes are likely to be bad.
So if an AI did manage to port its code, it would have to find some way of preventing or discouraging the copied AI on the x86-based architecture from playing with the ultimate mind-expanding/destroying drug that is machine-code modification. This is what I meant about stability.
Er, I can’t really give a better rebuttal than this: http://www.singinst.org/upload/LOGI//levels/code.html
What point are you rebutting?
The idea that a greater portion of possible changes to a human-style mind are bad than changes of an equal magnitude to a von Neumann-style mind.
Most random changes to a von Neumann-style mind would be bad as well.
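As a toy illustration of that point (my own sketch, not from the thread): randomly replacing even a single character in a tiny program’s source breaks it far more often than not. The program, the mutation scheme, and the correctness check below are all made up for the demonstration; it just stands in for "random changes to a von Neumann-style mind."

```python
import random

# Toy sketch: mutate one character of a tiny program's source at random
# and check whether the result still computes correctly.
SOURCE = "def add(a, b):\n    return a + b\n"

def mutate(src, rng):
    """Replace one character of src with a random printable ASCII character."""
    i = rng.randrange(len(src))
    return src[:i] + chr(rng.randrange(32, 127)) + src[i + 1:]

def still_correct(src):
    """True if the mutated source still defines a working add()."""
    ns = {}
    try:
        exec(src, ns)
        return ns["add"](2, 3) == 5
    except Exception:  # syntax errors, name errors, missing add(), etc.
        return False

rng = random.Random(0)
trials = 1000
good = sum(still_correct(mutate(SOURCE, rng)) for _ in range(trials))
print(f"{good} of {trials} single-character mutations survive")
```

Running this, the overwhelming majority of mutations produce a syntax error or a broken function; only the rare replacement that happens to be harmless (for instance, re-inserting the same character) survives.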
It’s just that a von Neumann-style mind is unlikely to make the random mistakes that we do, or at least that is Eliezer’s contention.
I can’t wait until there are uploads around to make questions like this empirical.