This isn’t quite an AGI. In particular, it doesn’t even take input from its surroundings.
Fair enough. We can handwave a little and say that AI2 built by AI1 might be able to sense things and self-modify, but this offloading of the whole problem to AI1 is not really satisfying. We’d like to understand exactly how AIs should sense and self-modify, and right now we don’t.
Let it build a machine that takes input from its own surroundings.
But the new machine can’t self-modify. My point is about the limitations of cousin_it’s example: the machine has a completely accurate model of the world as input and uses an extremely inefficient algorithm to find a way to paperclip the world.
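To make that setup concrete, here is a minimal toy sketch of what I mean, assuming the perfect model and the utility function are simply handed to the machine. Everything below (the toy world, `world_model`, `paperclips`, the enumeration scheme) is my own illustration, not cousin_it’s actual construction:

```python
from itertools import product

# Toy version of the machine under discussion: it is *handed* a perfect
# world model and finds a plan by sheer enumeration. The world is
# deliberately trivial -- state = (paperclips, factories) -- and every
# name here is a hypothetical stand-in.

ACTIONS = ("noop", "bend_wire", "build_factory")

def world_model(state, action):
    # "Completely accurate" by construction, since we defined the world.
    clips, factories = state
    if action == "bend_wire":
        return (clips + 1 + factories, factories)
    if action == "build_factory":
        return (clips + factories, factories + 1)
    return (clips + factories, factories)

def paperclips(state):
    return state[0]

def best_plan(initial_state, horizon):
    """Extremely inefficient: try all |ACTIONS|**horizon action sequences."""
    best, best_score = None, -1
    for plan in product(ACTIONS, repeat=horizon):
        state = initial_state
        for action in plan:
            state = world_model(state, action)
        if paperclips(state) > best_score:
            best, best_score = plan, paperclips(state)
    return best

print(best_plan((0, 0), horizon=5))
```

Note that nothing in this loop reads a sensor or rewrites the search procedure: the model is fixed input, and the planner is the same exhaustive loop on every run. That is exactly the limitation at issue.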
The second machine can be designed to build a third machine based on its own observations.
Yes, but now the argument that you will converge to a paperclipper is much weaker.