For starters, I suppose there is a reason why the “dual language” arose in the first place. Wouldn’t the same reason also apply to a superhuman artificial intelligence? I mean, if humans could invent something, a superhuman intelligence could probably invent it too. Does that mean we are screwed when that happens?
The reason is probably functional; it’s definitely useful to distinguish between agents, and between agent and environment. I think we have forgotten, though, that it’s just a useful convention. We are screwed if the AI forgets that (sort of the current state) and is superintelligent (not yet there). On the other hand, superintelligence might entail discovering non-dualism by itself.
Second, suppose we have succeeded in making the superintelligence see no boundary between itself and everything else, including humans. Wouldn’t that mean it would treat humans the same way I treat my body when I am, e.g., cutting my nails? (Uhm, do people who use non-dual language actually cut their own nails? Or do they just cut random people’s nails, expecting that strategy to work on average?) Some people abuse their bodies in various ways, and we have not yet established that the superintelligence would not, so there is a chance that the superintelligence would perceive us as parts of itself and still hurt us.
Well, cutting your nails is useful for the rest of the body; you don’t want to sacrifice everything for long nails. So it is quite possible that we end up extinct unless we prove ourselves more useful to the overall system than nails. I do believe we have that in us, as it’s not a matter of quantity but of quality.
Finally, if the superintelligence sees no difference between itself and me, then there is no harm in lobotomizing me and making me its puppet. I mean, my “I” has always been a mere illusion anyway.
The ‘I’ of the AI is an illusion as well, so it will probably have some empathy and compassion for us, or just be indifferent to that fact.