What would you say to the suggestion that rationalists ought to aspire to the “optimal” standard of truthtelling, that this standard might well be higher or lower than what the average person already practices (since there’s no obvious reason why they’d be biased in a particular direction), and that we’d need empirical observation and a serious look at the actual payoffs to figure out the correct readiness to lie?
since there’s no obvious reason why they’d be biased in a particular direction
No, I’m saying there are obvious reasons why we’d be biased toward truthtelling. I mentioned “spread truth about AI risk” earlier, but more generally, one of our main goals is to get our map to match the territory as a collaborative community project. Lying makes that harder.
Besides sabotaging the community’s map, lying is dangerous to your own map too. As OP notes, to lie really effectively, you have to believe the lie. Well is it said, “If you once tell a lie, the truth is ever after your enemy.”

But to answer your question: it’s not wrong to do a consequentialist analysis of lying. Again, I’m not a Kantian; if the guy is here to randomly murder you, tell him whatever lie you need to survive. But in less thought-experiment-y cases, there are a lot of long-term consequences that would be tough to measure.