More importantly from my own perspective: Some elements of human therapeutic practice, as described above, are not how I would want AIs relating to humans. Eg:
“Non-Confrontational Curiosity: Gauges the use of gentle, open-ended questioning to explore the user’s experience and create space for alternative perspectives without direct confrontation.”
Can you say more about why you would not want an AI to relate to humans with “non-confrontational curiosity?”
It appears to me like your comment is arguing against a situation in which the AI system has a belief about what the user should think/do, but instead of saying that directly, it tries to subtly manipulate the user into adopting this belief.
I read the “non-confrontational curiosity” approach as a different situation—one in which the AI system does not necessarily have a belief about what the user should think/do, and just asks some open-ended reflection questions in an attempt to get the user to crystallize their own views (without a target end state in mind).
I think many therapists who use the “non-confrontational curiosity” approach would say, for example, that they are usually not trying to get the client to a predetermined outcome, but rather are genuinely trying to help the client explore their own feelings/thoughts on a topic, and don’t have any stake in reaching a particular end destination. (Note that I’m thinking of therapists who use this style with people who are not in extreme distress—e.g., members of the general population with mild depression/anxiety/stress. This model may not be appropriate for people with more severe issues—e.g., severe psychosis.)
Can you say more about why you would not want an AI to relate to humans with “non-confrontational curiosity?”
I expect the general idea is that we don’t want them to be too oriented towards second-guessing our state of mind and trying to subtly shift it towards their idea of normality. A therapist and a con man have similar skill sets, and merely different goals. An AI too clever by half at doing that would also be much harder to correct.