In my experience, it is not that difficult to come up with an underlying reality where both are true—modulo some self-interested interpretation by both parties.
As long as the participants are aligned, their interpretations will mostly overlap, but as people start to fight, the interpretations tend to move diametrically apart.
And each participant will feel that their interpretation is “correct” because, as Robin Hanson would say, that makes them more convincing to others.
I think there’s a simpler explanation which is not mentioned in the post: the set of object-level facts mentioned in both narratives can be literally correct, and the narratives have approximately zero added epistemic value on top of that.
This works because of selection bias. If you have a robot toss a fair coin 1000 times and you care about whether there are more heads than tails or vice versa, and there are two people who can reveal to you 100 coins each with the goal of convincing you of one result or the other, it’s obvious what will happen: the heads guy will reveal 100 heads, the tails guy will reveal 100 tails, and you’ll be no closer to the truth than you were before.
This is almost always the case in the real world, because the number of yes/no questions needed to characterize a situation (something like Kolmogorov complexity in this setting) is vastly larger than the number that people can actually process on a case-by-case basis, especially if the parties are pushing disinformation to pollute all the relevant information channels (which they will be doing). The result is that anyone can find many factoids to support their position, and no one needs to lie.
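As an aside, the coin example is easy to simulate. The sketch below is a toy illustration, not anything from the post; the names and the 100-coin quota are just the numbers from the example above:

```python
import random

# Toy simulation of the selective-revelation example above.
rng = random.Random(42)
coins = [rng.random() < 0.5 for _ in range(1000)]  # True = heads

def reveal(coins, want_heads, quota=100):
    """Each advocate reveals only coins that support their side."""
    supporting = [c for c in coins if c == want_heads]
    return supporting[:quota]

heads_shown = reveal(coins, want_heads=True)
tails_shown = reveal(coins, want_heads=False)

# Whatever the true count, both advocates can fill their quota,
# so the curated samples tell you nothing about the majority.
print(sum(coins), len(heads_shown), len(tails_shown))
```

Since a fair coin essentially never lands on either side fewer than 100 times in 1000 tosses, both advocates always meet their quota, and the observer learns nothing from the reveals.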
In this case a lot of what are claimed as object-level facts are directly contradictory, but I like this point (and the way you made it) overall. If I quote you in the future elsewhere (e.g. FB or other essays) would you prefer that I paraphrase or copy-paste, and would you prefer attribution or anonymity?
Yeah, I agree that in this case there are some contradictory claims, but I don’t think they matter much overall. That’s because the line between lying and deception is not that sharp, and there’s an intuitively compelling sense in which the people revealing to you 100 heads or 100 tails from a run of, say, 510 heads in 1000 tosses are “lying”, or at least “being deceptive”. That’s the bigger problem, and the fact that they might mix this up with conflicting claims about, e.g., whether the 758th coin came up heads or tails is, in my opinion, not too important.
As for quoting me, you can do that whenever you want and however you want. You can attribute it to me or not, you can paraphrase it or not, et cetera. Totally up to you.
Agree.
And nice summary. Do you know if there is a writeup of an explanation like that somewhere that I can refer to (except, of course, your comment)?
Not contradicting what you say: This is sometimes (or even quite often) true, but it’s still worth emphasizing that, if one side is in fact lying, while the other side is trying hard to be truthful, then your assumption would assign roughly equal blame, which incentivizes/rewards the lying. (It can be easy for bad actors to obfuscate the facts in a situation by lying about everything they think they can get away with.) So if you don’t want to be a force that makes the world worse, you have an obligation to (at least) strongly consider and investigate the possibility that one side is almost completely responsible for all discrepancies.
I wouldn’t assign “blame” at all. Given their past and their cognitive structure, they acted in the way that made the most sense to them in the moment.
But yes, I would give them the same benefit of the doubt. Over time, though, you would learn more about who behaves in a trustworthy way. You rarely learn that about public figures, but that mostly doesn’t matter.
Two people fighting is, in any case, an indication not to trust either of them.
But in public, you will only see fights because the good non-fighters are invisible. Thus: Do not watch the news.
>I wouldn’t assign “blame” at all. Given their past and their cognitive structure, they acted in the way that made the most sense to them in the moment.
I concede that “blame” is a bad framing. What I’m trying to say is that, if we want to move towards a world where people are happier on average, it’s important to treat liars and self-deceivers differently than we’d treat people who don’t do these things. For instance, people who got caught doing these things repeatedly would no longer get the benefit of the doubt. (I think I’m saying something very trivial here, so we probably agree!)
>Two people fighting is, in any case, an indication not to trust either of them.
Kind of, if we’re talking about loose correlations. But sometimes the inference is unwarranted for one of the participants in the fight, in which case it’s important to hold space for that possibility: that one side was unfairly accused or otherwise caught up in something, or [edit:] that one side was correctly accused but somehow managed to deny, attack, and reverse victim and offender in the social environment’s perception of the incident.
If you get dragged into a fight for unfair reasons, it’s arguably kind of suboptimal to just give in to the attacker and avoid making a scene. “Be the bigger person” makes sense in a lot of circumstances, and even when it seems suboptimal, it can be very understandable. Still, there are situations where you definitely can’t fault people for fighting back. Sometimes people who fight are justly defending themselves against an attacker. Sometimes that’s brave, and other times that may even be their only option because the attack is an existential threat to their social standing, and they no longer have anything to lose. I think it’s the case somewhat frequently that attackers make things up or bizarrely misrepresent stuff. Dark triad personality traits (and vulnerable dark triad) tend to be involved, and while it’s possible for people with such traits to adhere to “Social Contract” principles, many don’t. (It’s often part of the symptom criteria that they don’t, but I definitely don’t want to demonize groups of people based on a cluster of criteria that doesn’t always present in the same way.)
Overall, I very much second Duncan’s norm of “withholding judgment” rather than a stance like “two people fighting is an indication not to trust either.” And I also think it’s good to cultivate a desire to get to the bottom of things, rather than an attitude of “oh, another fight, looks like the truth is somewhere in between.” That said, the original post by Duncan exemplifies that it’s often not practically possible to get to the bottom of it, and that’s arguably one of the most unfortunate things about human civilization.
I think we are in violent agreement.
>if we want to move towards a world where people are happier on average, it’s important to treat liars and self-deceivers differently
Yes, we should design incentives such that the better outcome is the more likely outcome.
>If you get dragged into a fight for unfair reasons, it’s arguably kind of suboptimal to just give in to the attacker and avoid making a scene.
Agree. My point was not to treat both people fighting equally forever. But initially, if all you know is that two people are fighting, that is some evidence that both of them are people you might want to avoid. I agree with “withholding judgment”: there is no reason to commit to one side, or to no side, unless forced by sub-optimal mechanisms.
“trust” is too big a word to use here. Two people fighting is a reason not to invest much of your future happiness in either, unless you’re part of the fight. Fighting harms all participants, EVEN the righteous. Your best EV, absent any other connections to the fight or fighters, is not to join.
You MAY want to join the meta-fight, arguing for procedures or framing that make this fight less harmful or easier to decide correctly. This is what I think Duncan is trying to do with this post—to show that most of us can’t update very much in any direction based on this, so it should overall be given less weight than it is (in some circles, at least).