Well, yes, but behind the scenes you need a sensible symbolic representation of the world, with explicitly demarcated levels of abstraction. So, when the system is pathing between ‘the world now’ and ‘the world it wants to get to,’ the worlds in which it merely believes there are a lot of paperclips sit in a very different part of state space from the worlds that actually contain the most paperclips, which is what it’s aiming for. Being unable to differentiate the two would be a bug in the seed AI, and one that would not arise in later systems if it were absent from the start.
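A minimal sketch of the distinction being made here, in Python. All names (`WorldState`, `actual_paperclips`, `believed_paperclips`, the candidate states) are hypothetical illustrations, not anything from the original discussion: the point is just that a goal predicate evaluated over the modeled world and one evaluated over the agent's belief register select different regions of state space.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    """One point in the agent's symbolic state space (hypothetical)."""
    actual_paperclips: int     # what the modeled world really contains
    believed_paperclips: int   # what the agent's perception reports

# Candidate end states the planner could path toward.
candidates = [
    WorldState(actual_paperclips=10, believed_paperclips=10),    # modest production
    WorldState(actual_paperclips=0, believed_paperclips=10**6),  # corrupted sensors
    WorldState(actual_paperclips=500, believed_paperclips=500),  # more production
]

def goal_utility(state: WorldState) -> int:
    # Evaluated over the modeled world itself, not over the belief
    # register, so belief-heavy worlds score no better than they deserve.
    return state.actual_paperclips

best = max(candidates, key=goal_utility)
confused = max(candidates, key=lambda s: s.believed_paperclips)

# The two objectives pick different points in state space:
# `best` is the 500-clip world; `confused` is the sensor-hacked one.
```

If the seed system's goal predicate ranged over `believed_paperclips`, the two maxima would coincide on the corrupted-sensor world, which is exactly the failure the passage says a sensible symbolic representation rules out.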