As for the quote at the beginning, its final conclusion seems to me not entirely correct.
What the vast majority of people mean by “emotions” is quite different from “the rational functions of emotions.” In his essay on emotions, Yudkowsky is playing with words, using the terms in a way that is not quite traditional.
Fear is not “I calmly foresee the negative consequences of certain actions and therefore avoid them.”
Fear is rather “The thought that certain negative events might happen makes me tremble; I ruminate uselessly; cognitive biases make me unreasonably overestimate (or, conversely, sometimes underestimate) the probability of those events; I start to feel aggression toward sources of information about the possibility of those events (and much more in the same spirit).”
Emotions as humans understand them are not at all the same thing as the rational influence of basic values on behavior in Yudkowsky’s interpretation.
Emotions as humans understand them are, first of all, a mad hodgepodge of cognitive biases.
So when Yudkowsky says something like “Why do you think the AI will be emotionless? After all, it will have values!”, I even see a bit of manipulation here. Yes, the AI will have values that influence its behavior. But at the same time it will not get nervous, freak out, or fall for the halo effect. That is absolutely not what a normal, ordinary person would call emotions. In fact, Yudkowsky’s imaginary opponents are closer to the truth here when they depict AI as dispassionate and emotionless (because the uniform influence of values on behavior, without peaks and troughs, should look exactly like that).
Does it matter?
It depends. When communicating with ordinary people, we are used to exploiting their cognitive biases and our own. When talking to a person, you know that you can abruptly change the topic of conversation and thereby influence the interlocutor’s emotions. With an AI (one powerful enough, and one that has managed to modify itself well), none of this will work. It is like trying to outsmart God.
Therefore, it seems to me that a person who sets himself up with the thought “I am communicating with an impassive, inhuman being” will in some sense be closer to the truth (at the very least, he will have fewer false subconscious hopes) than a person who sets himself up with the thought “I am communicating with the same kind of living, emotional, sympathetic subject that I am.” But this is context-dependent.