Or: Lying To Yourself About Changing Your Mind
Someone writes a hot take on Twitter, and you see red. “These morons don’t know what they’re talking about!” you think as you rain down keystrokes, forming a reply that will sweep them away. Leaning back, you’re satisfied with a job well done. One more idiot on the internet slain. Five minutes later, you see a response. Perhaps the idiot has written some feebleminded reply, unaware that they’ve been skewered by the pointy end of your wit. “Now let’s see- oh. Dang. That’s a pretty good point. Hm.” You’ve just been destroyed on the internet.
What do you do in this scenario? One option is to cope, but that’s undignified for a genius like yourself, though you’d never dream of making intelligence a part of your identity. Leave that to Mensa. Another is to quibble over details, hiding your 99 feet of error behind their 1 foot of error. Perhaps length-wise? But no, that’s also undignified for a proud rationalist like yourself. Besides, each rebuttal that springs to mind is cast down as cope by your unruly unconscious.
If you’re like me, at this point you’ll be at risk of fooling yourself into thinking you’ve updated.
Personally, I might concede that OK, the other person is right as I twist in my seat. But only grudgingly. Noticing my reluctance to update to the obviously correct point of view, I consider that I may not have given my true rejection yet. Focusing, I notice that I’ve got another objection. I listen to it and marshal the Objectively Correct arguments to bring that poor part of me into the light. Satisfied at the lack of any real response, I rejoice at being on the Right Side. Update successful.
What just happened was a Fake Update. Like a belief in belief, I can believe I updated. Yet tomorrow, I’ll go back to the position I had before the conversation. This is in spite of the fact that I was actually wrong.
Like belief in belief, Fake Updates occur when you have a non-epistemic incentive to update. Such incentives come in many guises.
Perhaps it’s because your actual rebuttal to the guy you’re arguing with sounds dumb, so you never raise it and you never actually get the chance to be shown you’re wrong. Perhaps it’s because your belief is a load-bearing part of your identity and risking it feels like you’re risking yourself. Perhaps it’s that you didn’t actually understand what someone else said but they’re too high status so you just nod your head and act like you’ve updated to please them[1], fooling even yourself.
Whatever the case, something is stopping you from propagating the info you’ve received into the rest of your beliefs. Like looking for a glass of water and seeing it’s on your desk, updates should be effortless. When they’re not, you can feel it. Learn the texture of Fake Updates: the sense that accompanies every “haha, yeah”, the tugging of your collar or twisting of your arms. Once you can feel them, you can do the dignified thing and say “I don’t believe this.”
[1] It won’t please them.
I’m pretty sure there was a post about why effortless updates are a bad idea. I couldn’t find the exact post, but I suggest remembering epistemic learned helplessness. The correct thing to do when faced with an argument that seems to disprove one of your beliefs is usually to nod politely and ignore it. After you have 1) thought about it for a while and 2) observed experts changing their mind on it, you may then actually consider adopting it.
Also, remember Yudkowsky’s AI box experiment where the experimenter playing an AI in a box can convince someone to let it out. The “pretty good point” may be the equivalent of Eliezer convincing you to let the AI out because he is convincing, not because he actually has a pretty good point. (This goes double for actual AIs.)
Opus was haranguing me about the effortless-updates point, as it was the shakiest claim in the essay. Anyway, I stand by it, with the caveat that once you fully understand a point, an update should be effortless. If it isn’t, and if you’re reluctant to update (so that your updates don’t look like a random walk), it’s likely that you’re being pressured into doing one anyway. This is what I’d call a fake update.
As to the example you gave, well: if you need to rely on expert consensus, then you don’t actually understand the point being made. And if you need to think about it for a while, again, you don’t understand the point being made.
This is, I think, a fairly weak position because I’m ignoring all of the sweat that goes into understanding a concept, which in and of itself can require building a lot of new cognitive structures and links between thoughts. Calling those changes updates seems sensible to me, and now I think I’ve argued myself out of my initial claim. Some updates are effortless, perhaps even most, but many aren’t.
The basic problem is epistemic learned helplessness. You know that your own reasoning process isn’t perfect and that there are arguments that are wrong, but which you can’t detect as wrong. In other words, “once you fully understand a point” is a state that you, an imperfect reasoner, can’t know you are in. Advice on what to do in that state is therefore useless. You need advice on what to do when you seem to be in that state, which may be different than advice on what to do if you know for sure that you’re in that state.