There’s no such thing as “convincing yourself” if you’re an agent, due to conservation of expected evidence. What people describe as “convincing yourself” is creating conditions under which a certain character-level belief is defensible to adopt, and then (character-level) adopting it. It’s an act, a simulacrum of having a belief.
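A minimal numerical sketch of the conservation-of-expected-evidence point (prior and likelihood values are made up for illustration; any consistent choice works): before running an experiment, a Bayesian agent's expected posterior equals its prior, so no plan of observation can be expected in advance to move credence in a chosen direction.

```python
# Conservation of expected evidence, numerically.
# Illustrative numbers only: H is some hypothesis, E some possible evidence.
prior_h = 0.3          # P(H): prior credence in H
p_e_given_h = 0.9      # P(E | H)
p_e_given_not_h = 0.2  # P(E | not-H)

# Marginal probability of observing E
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Posterior credence in H after observing E, and after observing not-E
post_if_e = p_e_given_h * prior_h / p_e
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)

# Expected posterior, weighted by how likely each observation is in advance
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e

# The expectation collapses back to the prior: you cannot plan to
# "convince yourself" upward, because any expected upward move after E
# is exactly offset by the expected downward move after not-E.
assert abs(expected_posterior - prior_h) < 1e-12
print(expected_posterior)  # 0.3, the same as the prior
```

The identity holds term by term: summing P(E)·P(H|E) over outcomes is just marginalizing E out of the joint, which returns P(H).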
(Narcissism is distinct from virtue ethics, which is the pursuit of actual good qualities rather than defensible character-level beliefs of having good qualities)
Since all three comments so far seem to have had the same basic objection, I’m going to reply to the parent.
It seems like the claim in your first paragraph is implicitly disjunctive: IF your beliefs are “about the world” (i.e. you’re modeling yourself as an agent with a truth-seeking epistemology), THEN “convincing yourself” isn’t a thing. So IF you’re “convincing yourself”, THEN the relevant “beliefs” aren’t a sincere attempt to represent the world.
Is your claim that the actual way the brain works is close enough to Bayesian updating that this is true?
Yes.
But humans are badly modeled as single agents. Our behavior is rather the result of multiple agents acting together. It seems to me that some of those agents do try to convince others.
I don’t believe humans are badly modeled as single agents. Rather, they are single agents that have communicative and performative aspects to their cognition and behavior. See: The Elephant In The Brain, Player vs Character.
If you have strong reason to think “single agent communicating and doing performances” is a bad model, that would be interesting.
In this case, “convincing yourself” is clearly motivated. It doesn’t make sense as a random interaction between two subagents (otherwise, why aren’t people just as likely to try to convince themselves they have bad qualities?); whatever interaction there is has been orchestrated by some agentic process. Look at the result, and ask who wanted it.
Do you have any empirical evidence?
The academic term for the Bayesian part is Bayesian Brain. Also see The Elephant In The Brain. The model itself (humans as singular agents doing performances) has some amount of empirical evidence (note, revealed preference models deductively imply performativity), and is (in my view) the most parsimonious. I haven’t seen empirical evidence specific to its application to narcissism, though.