One implication of this is that we can develop heuristics for how bad different lies are. The basic idea is that lies that are likely to spread (especially if their effectiveness depends on their spreading) are particularly bad, and worse still if they're likely to spread within your movement. (Note that lies used to increase support for your movement count here, since they'll bring in new recruits who believe them.)
Note that using these heuristics we can see that the classic example used to justify lying, “There are no Jews in my basement,” is in fact much less bad than Yvain’s example: “A man is more likely to be struck by lightning than be falsely accused of rape.”
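As a toy illustration (not from the thread itself), the heuristic above can be sketched as a scoring rule: badness scales with how likely the lie is to spread, with extra weight when spreading is the point of the lie, and more again when it spreads inside your own movement. The function name and all weights are made up for illustration only.

```python
def lie_badness(p_spread, depends_on_spreading, spreads_in_movement):
    """Toy score: higher means worse under the spread heuristic.

    p_spread: rough probability the lie propagates beyond its first telling.
    depends_on_spreading: True if the lie only works because it spreads.
    spreads_in_movement: True if it's likely to circulate among allies.
    """
    score = p_spread                  # base: likelihood of spreading at all
    if depends_on_spreading:
        score += p_spread             # extra weight: spreading is the mechanism
    if spreads_in_movement:
        score += p_spread             # extra weight: it misinforms your own side
    return score

# A one-off denial to a Gestapo officer: unlikely to propagate.
basement = lie_badness(0.05, False, False)

# A statistic coined to be repeated by supporters: built to propagate.
lightning = lie_badness(0.9, True, True)
```

On these (invented) inputs the basement lie scores far lower than the lightning statistic, matching the ranking claimed above; the point is only the direction of the comparison, not the numbers.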
First, I’m not sure what it means to say that “There are no Jews in my basement” is unlikely to spread. In a sense it’s a “pre-spread” lie, since the lack of Gestapo breaking down your doors implies that they are all already fairly confident of the falsehood; you’re just lying to decrease the probability that they’ll stop believing it.
Second, to add my own hypothetical: I can see an isomorphism (in terms of how the lie spreads) between “There are no Jews in my basement” and “There are no embezzled charity funds in my basement”. Obviously this isomorphism doesn’t extend to the morality of the lies, which makes it hard for me to see a connection between spreadability and immorality.
First, I’m not sure what it means to say that “There are no Jews in my basement” is unlikely to spread.
The Gestapo member is likely to have forgotten all about that specific lie by the time he finishes asking everyone on the block.
I can see an isomorphism (in terms of how the lie spreads) between “There are no Jews in my basement” and “There are no embezzled charity funds in my basement”. Obviously this isomorphism doesn’t extend to the morality of the lies, which makes it hard for me to see a connection between spreadability and immorality.
Disagree: the lies themselves are comparable; the difference in morality comes from the difference between the goals the lies are being used for.
Would you elaborate?