If you observe an action (A) that you judge so absurd that it casts doubt on the agent’s (G) rationality, then your confidence (C1) in G’s rationality should decrease. If C1 was previously high, then your confidence (C2) in your judgment of A’s absurdity should decrease.
So if someone you strongly trust to be rational does something you strongly suspect to be absurd, the end result ought to be that your trust and your suspicions are both weakened. Then you can ask yourself whether, after that modification, you still trust G’s rationality enough to believe that there exist good reasons for A.
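The mutual weakening described above can be made concrete with a toy joint Bayes update. All the numbers below are illustrative assumptions (they are not from the comment itself): we put a prior on whether G is rational and on whether A is genuinely absurd, then condition on the observation "A looks absurd to me."

```python
# Toy joint Bayesian update: agent G does action A, which looks absurd to me.
# All probabilities here are made-up illustrative values.

p_rational = 0.9  # C1: prior confidence that G is rational

# How likely G is to do something *genuinely* absurd:
p_absurd_given_rational = 0.05
p_absurd_given_irrational = 0.5

# How reliable my absurdity judgment is (relates to C2):
p_looks_absurd_given_absurd = 0.9      # genuine absurdity usually looks absurd
p_looks_absurd_given_not_absurd = 0.1  # occasional false alarm on good actions

# Enumerate the four joint worlds and condition on the evidence "A looks absurd".
posterior = {}
total = 0.0
for rational, p_r in ((True, p_rational), (False, 1 - p_rational)):
    p_a = p_absurd_given_rational if rational else p_absurd_given_irrational
    for absurd, p_b in ((True, p_a), (False, 1 - p_a)):
        p_look = (p_looks_absurd_given_absurd if absurd
                  else p_looks_absurd_given_not_absurd)
        posterior[(rational, absurd)] = p_r * p_b * p_look
        total += p_r * p_b * p_look
for world in posterior:
    posterior[world] /= total

# Marginals after the update:
new_c1 = posterior[(True, True)] + posterior[(True, False)]   # P(G rational | evidence)
new_c2 = posterior[(True, True)] + posterior[(False, True)]   # P(A genuinely absurd | evidence)
print(f"Trust in G's rationality: {p_rational:.2f} -> {new_c1:.2f}")
print(f"Probability A is genuinely absurd: {new_c2:.2f}")
```

With these particular numbers, trust in G drops (roughly 0.90 to 0.72) while the probability that A is genuinely absurd lands below a coin flip (roughly 0.49): both the trust and the suspicion come out weakened, exactly as the paragraph above describes.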
The only reason it feels like a problem is that human brains aren’t good at this. It sometimes helps to write it all down on paper, but mostly it’s just something to practice until it gets easier.
In the meantime, what I would recommend is giving some careful thought to why you trust G, and why you think A is absurd, independently of each other. That is: what’s your evidence? Are C1 and C2 at all calibrated to observed events?
If you conclude at the end of it that one or the other is unjustified, your problem dissolves and you know which way to jump. No problem.
If you conclude that they are both justified, then your best bet is probably to assume the existence of either evidence or arguments that you’re unaware of (more or less as you’re doing now)… not because “you can’t rule out the possibility” but because it seems more likely than the alternatives. Again, no problem.
And the fact that other people don’t end up in the same place simply reflects the fact that their prior confidence was different, presumably because their experiences were different and they don’t have perfect trust in everyone’s perfect Bayesianness. Again, no problem… you simply disagree.
Working out where you stand can be a useful exercise. In my own experience, I find it significantly diminishes my impulse to argue the point past where anything new is being said, which generally makes me happier.
Another thing: rationality is best expressed as a percentage, not a binary. I might look at the virtues and say “wow, I bet this guy only makes mistakes 10% of the time! That’s fantastic!” But then, when I see something that looks like a mistake, I’m not afraid to call it that. I just expect to see fewer of them.
There is no problem.