Can you say more about why you would not want an AI to relate to humans with “non-confrontational curiosity”?
I expect the general idea is that we don’t want them to be too oriented towards second-guessing our state of mind and trying to subtly shift it towards their idea of normality. A therapist and a con man have similar skillsets, and merely different goals. An AI too clever by half about doing that would also be much harder to correct.