I’m not fully clear on the concrete difference between “assume good faith” and “stick to the object level”, as instrumental strategies. I’ll use one of Zach’s examples, written as a dialog. Alice is sticking to the object level. I’m imagining that she is a Vulcan and her opinions of Zach’s intentions are inscrutable except for the occasional raised eyebrow.
Alice: “Your latest reply seems to contradict something you said earlier.”
Zach: “Look over there, a distraction!”
Alice: “I don’t understand how the distraction is relevant to resolving the inconsistency in your statements that I raised.”
Here is my attempt at the same conversation with Bob, who aggressively assumes good faith.
Bob: “Is there a contradiction between your latest reply and this thing you said earlier?”
Zach: “Look over there, a distraction!”
Bob: “I’d love to talk about that later, but right now I’m still confused about what you were saying earlier, can you help me?”
Is that the type of thing? Bob talks as if Zach has a heart of gold and the purest of intentions, whereas Alice talks as if Zach is a non-sentient text generator. In both cases, admitting that you're doing this is not part of the game. Both of them are modeling Zach's intentions, at least subconsciously. Both are strategically choosing not to leak their model of Zach to Zach at this stage of the conversation. Both are capable of switching to a different strategy as needed. So what are the reasons to prefer Alice's approach to Bob's?
To be clear, I completely agree that assuming good faith is a disaster as an epistemic strategy. Beyond the reasons mentioned above, brains are evolutionarily adapted to detect hidden motives and to generate emotions accordingly. Trying to fight that is unwise.