Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
Followup to: What Evidence Filtered Evidence?
In “What Evidence Filtered Evidence?”, we are asked to consider a scenario involving a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time. Observing Heads is 1 bit of evidence for the coin being Heads-biased (because the Heads-biased coin lands Heads with probability 2⁄3, the Tails-biased coin does so with probability 1⁄3, the likelihood ratio of these is (2⁄3)/(1⁄3) = 2, and log₂ 2 = 1), and analogously and respectively for Tails.
If such a coin is flipped ten times by someone who doesn’t make literally false statements, who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying[1] reporter used to decide to report those flips in particular. If they always report the 4th, 6th, and 9th flips independently of the flip outcomes—if there’s no evidential entanglement between the flip outcomes and the choice of which flips get reported—then reported flip-outcomes can be treated the same as flips you observed yourself: three Headses is 3 * 1 = 3 bits of evidence in favor of the hypothesis that the coin is Heads-biased. (So if we were initially 50:50 on the question of which way the coin is biased, our posterior odds after collecting 3 bits of evidence for a Heads-biased coin would be 2³:1 = 8:1, or a probability of 8/(1 + 8) ≈ 0.89 that the coin is Heads-biased.)
On the other hand, if the reporter mentions only and exactly the flips that came out Heads, then we can infer that the other 7 flips came out Tails (if they didn’t, the reporter would have mentioned them), giving us posterior odds of 2³:2⁷ = 8:128 = 1:16, or a probability of around 0.06 that the coin is Heads-biased.
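(If you want to check the arithmetic yourself, here is a quick Python sketch—mine, not anything from the original post—that reproduces both posteriors from the stated assumptions of 1:1 prior odds and 1 bit of evidence per known flip; the function name is just illustrative.)

```python
from fractions import Fraction

def posterior_prob_heads_biased(heads_reported, tails_inferred):
    """Posterior probability that the coin is Heads-biased, starting from 1:1
    prior odds, with each known Heads worth 1 bit for the Heads-biased
    hypothesis and each known Tails worth 1 bit against it."""
    odds = Fraction(2) ** (heads_reported - tails_inferred)
    return odds / (1 + odds)

# Reporting algorithm #1: which flips get reported is independent of the
# outcomes, so we only learn about the 3 reported Headses.
print(float(posterior_prob_heads_biased(3, 0)))  # 8/9 ≈ 0.89

# Reporting algorithm #2: all and only the Headses get reported, so the
# 7 unmentioned flips must have been Tails.
print(float(posterior_prob_heads_biased(3, 7)))  # 1/17 ≈ 0.06
```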
So far, so standard. (You did read the Sequences, right??) What I’d like to emphasize about this scenario today, however, is that while a Bayesian reasoner who knows the non-lying reporter’s algorithm of what flips to report will never be misled by the selective reporting of flips, a Bayesian with mistaken beliefs about the reporter’s decision algorithm can be misled quite badly: compare the 0.89 and 0.06 probabilities we just derived given the same reported outcomes, but different assumptions about the reporting algorithm.
If the coin gets flipped a sufficiently large number of times, a reporter whom you trust to be impartial (but isn’t), can make you believe anything she wants without ever telling a single lie, just with appropriate selective reporting. Imagine a very biased coin that comes up Heads 99% of the time. If it gets flipped ten thousand times, 100 of those flips will be Tails (in expectation), giving a selective reporter plenty of examples to point to if she wants to convince you that the coin is extremely Tails-biased.
Toy models about biased coins are instructive for constructing examples with explicitly calculable probabilities, but the same structure applies to any real-world situation where you’re receiving evidence from other agents, and you have uncertainty about what algorithm is being used to determine what reports get to you. Reality is like the coin’s bias; evidence and arguments are like the outcome of a particular flip. Wrong theories will still have some valid arguments and evidence supporting them (as even a very Heads-biased coin will come up Tails sometimes), but theories that are less wrong will have more.
If selective reporting is mostly due to the idiosyncratic bad intent of rare malicious actors, then you might hope for safety in (the law of large) numbers: if Helga in particular is systematically more likely to report Headses than Tailses that she sees, then her flip reports will diverge from everyone else’s, and you can take that into account when reading Helga’s reports. On the other hand, if selective reporting is mostly due to systemic structural factors that result in correlated selective reporting even among well-intentioned people who are being honest as best they know how,[2] then you might have a more serious problem.
“A Fable of Science and Politics” depicts a fictional underground Society polarized between two partisan factions, the Blues and the Greens. “[T]here is a ‘Blue’ and a ‘Green’ position on almost every contemporary issue of political or cultural importance.” If human brains consistently understood the is/ought distinction, then political or cultural alignment with the Blue or Green agenda wouldn’t distort people’s beliefs about reality. Unfortunately … humans. (I’m not even going to finish the sentence.)
Reality itself isn’t on anyone’s side, but any particular fact, argument, sign, or portent might just so happen to be more easily construed as “supporting” the Blues or the Greens. The Blues want stronger marriage laws; the Greens want no-fault divorce. An evolutionary psychologist investigating effects of kin-recognition mechanisms on child abuse by stepparents might aspire to scientific objectivity, but being objective and staying objective is difficult when you’re embedded in an intelligent social web in which your work is going to be predictably championed by Blues and reviled by Greens.
Let’s make another toy model to try to understand the resulting distortions on the Undergrounders’ collective epistemology. Suppose Reality is a coin—no, not a coin, a three-sided die,[3] with faces colored blue, green, and gray. One-third of the time it comes up blue (representing a fact that is more easily construed as supporting the Blue narrative), one-third of the time it comes up green (representing a fact that is more easily construed as supporting the Green narrative), and one-third of the time it comes up gray (representing a fact that not even the worst ideologues know how to spin as “supporting” their side).
Suppose each faction has social-punishment mechanisms enforcing consensus internally. Without loss of generality, take the Greens (with the understanding that everything that follows goes just the same if you swap “Green” for “Blue” and vice versa).[4] People observe rolls of the die of Reality, and can freely choose what rolls to report—except a resident of a Green city who reports more than 1 blue roll for every 3 green rolls is assumed to be a secret Blue Bad Guy, and faces increasing social punishment as their ratio of reported green to blue rolls falls below 3:1. (Reporting gray rolls is always safe.)
The punishment is typically informal: there’s no official censorship from Green-controlled local governments, just a visible incentive gradient made out of social-media pile-ons, denied promotions, lost friends and mating opportunities, increased risk of being involuntarily committed to psychiatric prison,[5] &c. Even people who privately agree with dissident speech might participate in punishing it, the better to evade punishment themselves.
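(As a concrete illustration of the reporting rule—nothing here is specified in this post beyond the 3:1 ratio itself, and the function and variable names are mine—here is a little Python sketch of a constrained reporter.)

```python
import random

def constrained_report(rolls, greens_per_blue=3):
    """Report every green and gray roll, but only as many blue rolls as the
    local Overton ratio permits: at most 1 blue for every 3 greens reported."""
    greens = [r for r in rolls if r == "green"]
    grays = [r for r in rolls if r == "gray"]
    blues = [r for r in rolls if r == "blue"]
    blue_budget = len(greens) // greens_per_blue  # stay at or under the 3:1 ratio
    return greens + grays + blues[:blue_budget]

random.seed(0)
rolls = random.choices(["blue", "green", "gray"], k=30)  # the die of Reality
report = constrained_report(rolls)
print({color: rolls.count(color) for color in ("blue", "green", "gray")})   # what Reality rolled
print({color: report.count(color) for color in ("blue", "green", "gray")})  # what it's safe to say
```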
This scenario presents a problem for people living in Green cities who want to make and share accurate models of reality. It’s impossible to report every die roll (the only 1:1 scale map of the territory, is the territory itself), but it seems clear that the most generally useful models—the ones you would expect arbitrary AIs to come up with—aren’t going to be sensitive to which facts are “blue” or “green”. The reports of aspiring epistemic rationalists who are just trying to make sense of the world will end up being about one-third blue, one-third green, and one-third gray, matching the distribution of the Reality die.
From the perspective of ordinary nice smart Green citizens who have not been trained in the Way, these reports look unthinkably Blue. Aspiring epistemic rationalists who are actually paying attention can easily distinguish Blue partisans from actual truthseekers,[6] but the social-punishment machinery can’t process more than five words at a time. The social consequences of being an actual Blue Bad Guy, or just an honest nerd who doesn’t know when to keep her stupid trap shut, are the same.
In this scenario,[7] public opinion within a subculture or community in a Green area is constrained by the 3:1 (green:blue) “Overton ratio.” In particular, under these conditions, it’s impossible to have a rationalist community—at least the most naïve conception of such. If your marketing literature says, “Speak the truth, even if your voice trembles,” but all the savvy high-status people’s actual reporting algorithm is, “Speak the truth, except when that would cause the local social-punishment machinery to mark me as a Blue Bad Guy and hurt me and any people or institutions I’m associated with—in which case, tell the most convenient lie-of-omission”, then smart sincere idealists who have internalized your marketing literature as a moral ideal and trust the community to implement that ideal, are going to be misled by the community’s stated beliefs—and confused at some of the pushback they get when submitting reports with a 1:1:1 blue:green:gray ratio.
Well, misled to some extent—maybe not much! In the absence of an Oracle AI (or a competing rationalist community in Blue territory) to compare notes with, it’s not clear how one could get a better map than trusting what the “green rationalists” say. With a few more made-up modeling assumptions, we can quantify the distortion introduced by the Overton-ratio constraint, which will hopefully help develop an intuition for how large of a problem this sort of thing might be in real life.
Imagine that Society needs to make a decision about an Issue (like a question about divorce law or merchant taxes). Suppose that the facts relevant to making optimal decisions about an Issue are represented by nine rolls of the Reality die, and that the quality (utility) of Society’s decision is proportional to the (base-two logarithm) entropy of the distribution of what facts get heard and discussed.[8]
The maximum achievable decision quality is log₂ 9 ≈ 3.17.
On average, Green partisans will find 3 “green” facts[9] and 3 “gray” facts to report, and mercilessly stonewall anyone who tries to report any “blue” facts, for a decision quality of log₂ 6 ≈ 2.58.
On average, the Overton-constrained rationalists will report the same 3 “green” and 3 “gray” facts, but something interesting happens with “blue” facts: each individual can only afford to report one “blue” fact without blowing their Overton budget—but it doesn’t have to be the same fact for each person. Reports of all 3 (on average) blue rolls get to enter the public discussion, but get mentioned (cited, retweeted, &c.) 1⁄3 as often as green or gray rolls, in accordance with the Overton ratio. So it turns out that the constrained rationalists end up with a decision quality of (6/7) log₂ 7 + (1/7) log₂ 21 ≈ 3.03,[10] significantly better than the Green partisans—but still falling short of the theoretical ideal where all the relevant facts get their due attention.
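(To check these numbers, here is a short Python sketch—again my own, using the post’s made-up modeling assumptions—that computes the base-two entropy of the attention distribution over the nine facts in each of the three cases.)

```python
from math import log2

def decision_quality(weights):
    """Base-two entropy of the normalized distribution of attention over facts."""
    total = sum(weights)
    return -sum((w / total) * log2(w / total) for w in weights if w > 0)

# Nine equally relevant facts: 3 "green", 3 "gray", 3 "blue" (in expectation).
ideal       = decision_quality([1] * 9)            # every fact gets equal attention
partisans   = decision_quality([1] * 6)            # "blue" facts stonewalled entirely
constrained = decision_quality([3] * 6 + [1] * 3)  # "blue" facts mentioned 1/3 as often

print(round(ideal, 2), round(partisans, 2), round(constrained, 2))  # 3.17 2.58 3.03
```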
If it’s just not pragmatic to expect people to defy their incentives, is this the best we can do? Accept a somewhat distorted state of discourse, forever?
At least one partial remedy seems apparent. Recall from our original coin-flipping example that a Bayesian who knows what the filtering process looks like, can take it into account and make the correct update. If you’re filtering your evidence to avoid social punishment, but it’s possible to clue in your fellow rationalists to your filtering algorithm without triggering the social-punishment machinery—you mustn’t assume that everyone already knows!—that’s potentially a big win. In other words, blatant cherry-picking is the best kind!
[1] I don’t quite want to use the word honest here.
[2] And it turns out that knowing how to be honest is much more work than one might initially think. You have read the Sequences, right?!
[3] For lack of an appropriate Platonic solid in three-dimensional space, maybe imagine tossing a triangle in two-dimensional space??
[4] As an author, I’m facing some conflicting desiderata in my color choices here. I want to say “Blues and Greens” in that order for consistency with “A Fable of Science and Politics” (and other classics from the Sequences). Then when making an arbitrary choice to talk in terms of one of the factions in order to avoid cluttering the exposition, you might have expected me to say “Without loss of generality, take the Blues,” because the first item in a sequence (“Blues” in “Blues and Greens”) is more of a Schelling point than the second, or last, item. But I don’t want to take the Blues, because that color choice has other associations that I’m trying to avoid right now: if I said “take the Blues”, I fear many readers would assume that I’m trying to directly push a partisan point about soft censorship and preference-falsification social pressures in liberal/left-leaning subcultures in the contemporary United States. To be fair, it’s true that soft censorship and preference-falsification social pressures in liberal/left-leaning subcultures in the contemporary United States are, historically, what inspired me, personally, to write this post. It’s okay for you to notice that! But I’m trying to talk about the general mechanisms that generate this class of distortions on a Society’s collective epistemology, independently of which faction or which ideology happens to be “on top” in a particular place and time. If I’m doing my job right, then my analogue in a “nearby” Everett branch whose local subculture was as “right-polarized” as my Berkeley environment is “left-polarized”, would have written a post making the same arguments.
[5] Okay, they market themselves as psychiatric “hospitals”, but let’s not be confused by misleading labels.
[6] Or rather, aspiring epistemic rationalists can do a decent job of assessing the extent to which someone is exhibiting truth-tracking behavior, or Blue-partisan behavior. Obviously, people who are consciously trying to seek truth, are not necessarily going to succeed at overcoming bias, and attempts to correct for the “pro-Green” distortionary forces being discussed in this parable could easily veer into “pro-Blue” over-correction.
[7] Please be appropriately skeptical about the real-world relevance of my made-up modeling assumptions! If it turned out that my choice of assumptions were (subconsciously) selected for the resulting conclusions about how bad evidence-filtering is, that would be really bad for the same reason that I’m claiming that evidence-filtering is really bad!
[8] The entropy of a discrete probability distribution is maximized by the uniform distribution, in which all outcomes receive equal probability-mass. I only chose these “exactly nine equally-relevant facts/rolls” and “entropic utility” assumptions to make the arithmetic easy on me; a more realistic model might admit arbitrarily many facts into discussion of the Issue, but posit a distribution of facts/rolls with diminishing marginal relevance to Society’s decision quality.
[9] The scare quotes around the adjective “‘green’” (&c.) when applied to the word “fact” (as opposed to a die roll outcome representing a fact in our toy model) are significant! The facts aren’t actually on anyone’s side! We’re trying to model the distortions that arise from stupid humans thinking that the facts are on someone’s side! This is sufficiently important—and difficult to remember—that I should probably repeat it until it becomes obnoxious!
[10] You have three green slots, three gray slots, and three blue slots. You put three counters on each of the green and gray slots, and one counter on each of the blue slots. The frequencies of counters per slot are [3, 3, 3, 3, 3, 3, 1, 1, 1]. The total number of counters you put down is 3*6 + 3 = 18 + 3 = 21. To turn the frequencies into a probability distribution, you divide everything by 21, to get [1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/21, 1/21, 1/21]. Then the entropy is −(6 · (1/7) · log₂(1/7) + 3 · (1/21) · log₂(1/21)), which simplifies to (6/7) log₂ 7 + (1/7) log₂ 21 ≈ 3.03.