“Why didn’t you tell him the truth? Were you afraid?”
“I’m not afraid. I chose not to tell him, because I anticipated negative consequences if I did so.”
“What do you think ‘fear’ is, exactly?”
The possibly amusing thing is that I read it as someone who thought fear was shameful and was therefore lying, or possibly lying to themselves, about not feeling fear. I wasn’t expecting a discussion of p-zombies, though perhaps I should have been.
Does being strongly inhibited against knowing one’s own emotions make one more like a p-zombie?
As for social inhibitions against denying what other people say about their motives, it’s quite true that it can be socially corrosive to propose alternate motives for what people are doing, but I don’t think your proposal will make things much worse.
We’re already there. A lot of political discourse includes assuming the worst about the other side’s motivations.
Regarding the quote at the beginning: its final conclusion seems to me not entirely correct.
What the vast majority of people mean by “emotions” is different from “the rational functions of emotions.” In his essay on emotions, Yudkowsky is playing with words, using terms in a way that is not quite traditional.
Fear is not “I calmly foresee the negative consequences of some actions and therefore I avoid them.”
Fear is rather “The thought that some negative event might happen makes me tremble; I ruminate uselessly; I have cognitive distortions that make me unreasonably overestimate (or, conversely, sometimes underestimate) the probability of these negative events; I begin to feel aggression towards sources of information about the possibility of these negative events (and much more in the same spirit).”
Emotions, as humans understand them, are not at all the same as the rational influence of basic values on behavior in Yudkowsky’s interpretation.
Emotions, as humans understand them, are first of all a mad hodgepodge of cognitive distortions.
Therefore, when Yudkowsky says something like “Why do you think AI will be emotionless? After all, it will have values!”, I even see a bit of manipulation. Yes, AI will have values influencing its behavior. But at the same time, it will not get nervous, freak out, or experience the halo effect. This is absolutely not what an ordinary person would call emotions. In fact, here Yudkowsky’s imaginary opponents are closer to the truth in depicting AI as dispassionate and emotionless (because the uniform influence of values on behavior, without peaks and troughs, should look exactly like that).
Does it matter?
It depends. When communicating with ordinary people, we are used to exploiting their cognitive distortions and our own. When talking to a person, you know that you can suddenly change the topic of conversation and influence the interlocutor’s emotions. With an AI (one powerful enough, and one that has managed to modify itself well), none of this will work. It is like trying to outsmart God.
Therefore, it seems to me that a person who tunes himself to the thought “I am communicating with an impassive inhuman being” will in some sense be closer to the truth (at least, will have fewer false subconscious hopes) than a person who tunes himself to the thought “I am communicating with the same living, emotional, sympathetic subject that I am.” But this is context-dependent.