Scott once had a post about how it’s hard to get advice only to the people who need it.
Sam Bankman-Fried may have lied too much (although the real problem was probably goals that conflicted with ours), but the essay here is aimed at the typical LW geek, and LW geeks tend not to lie enough.
I’m not convinced SBF had conflicting goals, although it’s hard to know. But more importantly, I don’t agree rationalists “tend not to lie enough”. I’m no Kantian, to be clear, but I believe rationalists ought to aspire to a higher standard of truthtelling than the average person, even if there are some downsides to that.
What would you say to the suggestion that rationalists ought to aspire to the “optimal” standard of truthtelling, that this standard might well be higher or lower than what the average person is already doing (since there’s no obvious reason why they’d be biased in a particular direction), and that we’d need empirical observation and a serious look at the actual payoffs to figure out the correct readiness to lie?
since there’s no obvious reason why they’d be biased in a particular direction
No, I’m saying there are obvious reasons why we’d be biased towards truthtelling. I mentioned “spread truth about AI risk” earlier, but also, more generally, one of our main goals is to get our map to match the territory as a collaborative community project. Lying makes that harder.
Besides sabotaging the community’s map, lying is dangerous to your own map too. As OP notes, to really lie effectively, you have to believe the lie. Well is it said, “If you once tell a lie, the truth is ever after your enemy.”
But to answer your question, it’s not wrong to do a consequentialist analysis of lying. Again, I’m not a Kantian: tell the guy who’s here to murder you whatever lie you need to survive. But in less thought-experimenty cases, I think there are a lot of long-term consequences that would be tough to measure.