If you are right about the first points, it is basically impossible to have a general intelligence with an absolute but limited goal like "maximizing the number of paperclips." I agree that this is impossible in practice (even if it might be possible in principle, in some sense of "in principle"). I've argued as much here a few times, with somewhat similar reasoning.
So what do we do about it? With intelligence as nebulous a notion as it currently is, I think it makes sense to have one group of people working on "what if maximisers?" and another group working on "what if satisficers?". We currently lack the latter.
The same goes for the over-emphasis on singletons, and again for the over-emphasis on GOFAI/logical AI.
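For concreteness, here is a minimal toy sketch (my own illustration, not anything from the original discussion) of the behavioural difference between the two kinds of agent being contrasted: a maximiser always chases the highest-utility option, while a satisficer stops at the first option that clears a "good enough" threshold.

```python
def maximiser(options, utility):
    """Pick the option with the highest utility, regardless of how extreme it is."""
    return max(options, key=utility)

def satisficer(options, utility, threshold):
    """Pick the first option that is 'good enough'; fall back to the best seen so far."""
    best = None
    for option in options:
        if utility(option) >= threshold:
            return option
        if best is None or utility(option) > utility(best):
            best = option
    return best

if __name__ == "__main__":
    options = ["walk", "bike", "drive", "charter a jet"]
    utility = {"walk": 1, "bike": 3, "drive": 5, "charter a jet": 9}.get
    print(maximiser(options, utility))      # 'charter a jet'
    print(satisficer(options, utility, 3))  # 'bike'
```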