SA’s piece has moved on somewhat, and (1) the “hostile environment” bit is not talking about the contemporary US but about (a somewhat hypothetical version of) the Stalin-era USSR
It doesn’t seem to me like the setting of the illustrative examples should matter, though? The problem of bounded distrust should be qualitatively the same whether your local authorities lie a lot or only a little. Any claims I advance about human rationality in Berkeley 2023 should also hold in Stalingrad 1933, or African Savanna −20,003, or Dyson Sphere Whole-Brain Emulation Nature Preserve 2133.
and (2) it’s about an entirely different question: how should we interact with people whose assessment of the honesty of what those would-be authorities are saying is different from ours?
I think they’re related! The general situation is: agent A broadcasts claim K, either because K is true and A wants Society to benefit from knowing this, or because A benefits from Society believing K. Agents B and C have bounded distrust towards A, and are deciding whether they should believe K. B says that K doesn’t seem like the sort of thing A would lie about. From C’s perspective, this could be because it really is true that K isn’t the sort of thing that A would lie about—or it could be that A and B are in cahoots.
Section IV of “Bounded Distrust” opens with the case where A = “credentialed experts”, K = “ivermectin doesn’t work for COVID”, B = “Scott Alexander”, and C = “Alexandros Marinos”. But the problem should be the same if A = “Chief Ugg”, K = “there’s a lion across the river”, or A = “the Dyson Sphere Whole-Brain Emulation Nature Preserve Tourism Board”, K = “Norton AntiVirus works for cyber-shingles”, &c.
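The asymmetry between B’s and C’s readings can be sketched as a toy Bayesian update (all the numbers here are made up for illustration, not drawn from Scott’s or Alexandros’s actual positions): the evidential weight C assigns to B’s endorsement of K depends on C’s prior probability that A and B are in cahoots, because collusion would make B endorse K regardless of its truth.

```python
def posterior_k_given_endorsement(p_k, p_cahoots,
                                  p_endorse_given_k=0.9,
                                  p_endorse_given_not_k=0.1):
    """P(K | B endorses K), marginalizing over whether A and B are in cahoots.

    If they're in cahoots, B endorses K no matter what, so the endorsement
    carries no evidence about K; if not, B's endorsement tracks K's truth.
    (All parameter values are hypothetical.)
    """
    # Mix the "cahoots" hypothesis (B endorses with probability 1 either way)
    # with the "independent B" hypothesis (endorsement correlates with K).
    p_endorse_k = p_cahoots * 1.0 + (1 - p_cahoots) * p_endorse_given_k
    p_endorse_not_k = p_cahoots * 1.0 + (1 - p_cahoots) * p_endorse_given_not_k
    # Bayes' theorem on B's endorsement as evidence for K.
    return (p_endorse_k * p_k) / (
        p_endorse_k * p_k + p_endorse_not_k * (1 - p_k))

# If C thinks collusion is unlikely, B's endorsement moves C substantially;
# if C thinks collusion is likely, the same endorsement barely moves C at all.
print(posterior_k_given_endorsement(p_k=0.5, p_cahoots=0.05))
print(posterior_k_given_endorsement(p_k=0.5, p_cahoots=0.9))
```

The point of the sketch is that B and C can agree on every other parameter and still end up far apart, purely because of their differing priors on “A and B are in cahoots”—which is exactly the parameter that B’s say-so can’t settle.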
The general problem is that agents with different interests sometimes have an incentive to distort shared maps, so it’s very naïve to say “it’s important for these two types of people to understand each other” as if differences in who one trusts were solely due to differences in map-correction skill (mistake theory), rather than differences in who one trusts to not distort shared maps to one’s own detriment (conflict theory).
(Thanks for commenting! You’re really challenging me to think about this more deeply. This post came about as a 20x wordcount expansion of a Tweet, but now that your criticism has forced me to generalize it, I’m a little worried that my presentation of the core rationality insight got “contaminated” by inessential details of my political differences with Scott; it seems like there should be a clearer explanation for my intuition that mistake theory corresponds with the “loyalist” rather than the “dissident” side of a conflict—something about how power can make contingent arrangements seem more “natural” than they really are?—and I’m not immediately sure how to make that crisp, which means my intuition might be wrong.)