Pascal’s wager / Pascal’s mugging is a situation in which small probabilities of large amounts of (dis)value[1] result in decisions which maximize expected utility, but which seem intuitively absurd. While many people have debated the rational response to Pascal’s muggings, there has been comparatively little discussion of principled thresholds of value and/or probability beyond which a wager should be considered Pascalian and therefore problematic. So, in this post, I raise the question, “what is your threshold for considering something a Pascal’s mugging and why?” and discuss some options for what probabilities you should consider too small to care about or what amount of value you should consider too large to let swamp your utility calculations.[2]
Option 1: Risk-neutral EU maximization
Just shut up and multiply your best estimates of the utility and probability, and pay out if the product is worth it. (For this option, I am assuming an unbounded utility function, but will discuss bounded options below.) The arguments in risk-neutrality’s favor are that it is the most straightforward / principled application of expected-utility theory, and avoiding it leads to logical contradictions (Wilkinson, 2020). “Why come up with a bunch of epicycles just because it seems weird to care about unlikely events?” argue the risk-neutral defenders.
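To make “shut up and multiply” concrete, here is a minimal sketch with invented numbers (not anyone’s actual estimates):

```python
# Risk-neutral EU check on a mugger's offer, with made-up numbers.
p_payout = 1e-10        # best-guess probability the mugger delivers
value_if_paid = 1e15    # utility (say, lives saved) if he does
cost_of_paying = 5      # utility forgone by handing over the $5

expected_utility = p_payout * value_if_paid   # = 1e5
if expected_utility > cost_of_paying:
    print("Risk-neutral EU maximization says: pay the mugger.")
```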
Pascalian threshold
There are no Pascal’s muggings, only bad bets.
Cons
Being risk-neutral leads to logical contradictions (e.g. Kosonen, 2025, section 3).
It leads to conclusions that seem very weird.
But this may just be a bullet we have to bite.
Even if fanaticism is true in theory, I think in practice we will have to yield to some of the limits I discuss below, rather than being a perfect risk-neutral EU maximizer. I will discuss these counterpoints in the relevant sections, since they are also arguments for the respective limits.
Option 2: Ignore infinities
Pretty straightforward: Engage in risk-neutral EU maximization for only finite quantities.[3] If you don’t do this, you will fall prey to the criticisms of the original Pascal’s wager, including the fact that it can be used to argue for any arbitrary conclusion and that it is not clear which infinite option you should choose.[4]
Pascalian threshold
The only Pascal’s mugging is the old-fashioned kind where the thing at stake is of infinite value. But also, there will be situations where, for any finite payoff, the p of it is low enough for the expected utility to be less than the cost of paying up (and the threshold for this may be higher even than some of the other thresholds I will discuss below). A risk-neutral EU maximizer does not discount a situation a priori because it has high utility and/or low p, but of course there’s no need to be naive about expected utility calculations. You can, for example, use game-theoretic justifications for not paying a literal Pascal’s mugger[5] or calculate that the p of payout is much lower than the mugger suggests.[6]
Cons
But like, infinities could be real and valuable, and it seems like we should have some way to weigh them.[7]
Other people have tried to solve this problem (E.g. Bostrom, 2011), and I’m not sure I have anything to add here.
Option 3: Bounded utility function
Have a utility function with a horizontal asymptote, so even arbitrarily large amounts of value will not provide arbitrarily large amounts of utility. (In subsequent examples, I will assume that the utility function is either unbounded or has a high enough bound for the relevant probabilities to make a difference.) Alice Blair makes an argument for why utility might asymptote: It does intuitively seem like things cannot get better and better without end. Another argument in favor of bounded utility is that if you have an unbounded utility function, it leads to logical contradictions (Christiano, 2022; McGee, 1999).
Pascalian threshold
Increasing the payoff of a bet will only increase its expected utility up to a point if you have bounded utility, and therefore if it is very low-p, it may have negligible EU no matter how large the value. However, it’s unclear at what level this happens, and it could be very high.
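To see the saturation numerically, here is a sketch using one arbitrary choice of bounded utility function (an exponential approach to a made-up ceiling U_MAX); the probability is also invented:

```python
import math

U_MAX = 1e12  # hypothetical ceiling on utility

def bounded_utility(value):
    # Increases with value but asymptotes to U_MAX.
    return U_MAX * (1 - math.exp(-value / U_MAX))

p = 1e-15  # probability of the promised payoff
for value in (1e12, 1e20, 1e100):
    print(f"value {value:.0e} -> EU {p * bounded_utility(value):.2e}")
# EU can never exceed p * U_MAX = 1e-3 here, however large the promised value,
# so once p is low enough the bet stays negligible no matter what is offered.
```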
Cons
If you have a bounded utility function, it leads to logical contradictions (here; Kosonen, 2022, Ch. 1).
Maybe this is just a failure to imagine how experiences could have arbitrarily high utility, or we are reifying the fact that humans get diminishing returns to resources / positive experiences, which may be a contingent fact of our circumstances.
If you have some term in your utility function that corresponds to your subjective experience of pleasure or suffering or to whether your preferences are satisfied according to the preferences you have at the time when they are satisfied, surely a sufficiently powerful agent could modify your preferences to be unbounded, which could cause your overall utility function to be unbounded. See footnote for elaboration.[8]
Option 4: Limit from bounded information
This limit is the subject of Eliezer Yudkowsky’s old post about “Pascal’s Muggle”:
“[T]here’s just no way you can convince me that I’m in a position to affect a googolplex people, because the prior probability of that is one over googolplex. [...] [T]o conclude something whose prior probability is on the order of one over googolplex, I need on the order of a googol bits of evidence, and you can’t present me with a sensory experience containing a googol bits. Indeed, you can’t ever present a mortal like me with evidence that has a likelihood ratio of a googolplex to one—evidence I’m a googolplex times more likely to encounter if the hypothesis is true, than if it’s false—because the chance of all my neurons spontaneously rearranging themselves to fake the same evidence would always be higher than one over googolplex. You know the old saying about how once you assign something probability one, or probability zero, you can’t update that probability regardless of what evidence you see? Well, odds of a googolplex to one, or one to a googolplex, work pretty much the same way.”
Pascalian threshold
The threshold is when the amount of utility is so large that it implies a leverage prior[9] so low that no amount of information you are likely to ever obtain could render it plausible enough to be worth paying. In the quote above, Yudkowsky caps the evidence a mortal could ever take in at well under a googol bits, so the probability threshold must sit somewhere around one over a googolplex.
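The arithmetic behind that bound is just Bayes in log-odds: n bits of evidence can shift your odds by at most a factor of 2^n, so a cap on lifetime evidence fixes the smallest prior you could ever update into contention. A sketch, using the “googol bits” figure from the quote as the cap:

```python
import math

max_bits = 1e100  # "a googol bits", the ceiling mentioned in the quoted passage

# n bits of evidence multiplies your odds by at most 2**n, so the smallest prior
# that could ever be raised to even odds is about 2**(-n).
threshold_log10 = -max_bits * math.log10(2)
print(f"p threshold ~ 10^({threshold_log10:.2e})")
# ~ 10^(-3.0e99); reaching one-over-googolplex would take ~3.3 googol bits.
```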
Con
The problem talked about in the Pascal’s Muggle post: This means that even information that seems intuitively persuasive should not convince you that the probability is macroscopic.
However, maybe this is just a bullet we have to bite, if we really think the probability is that low based on the leverage prior.
Yudkowsky resolves this with a retroactive “superupdate” to his prior, but I’m not sure this is principled in this case, mostly because as mentioned above, the evidence could be hallucinated.[10]
Option 5: Background uncertainty
Christian Tarsney (2020) writes that, when making decisions based on stochastic dominance, background uncertainty can make it rationally permissible to ignore sufficiently small probabilities of extreme payoffs, even if these options are superior in terms of EU. Suppose you are trying to compare options based on the total amount of utility that will exist after acting on each option, but you have uncertainty about how much utility will exist “in the background” independent of your choice. Depending on the shape of this uncertainty, when you convolve it with the p distribution of a pascalian bet, it may spread out the high-value outcomes such that they become negligible relative to background noise and this bet fails to stochastically dominate alternative options.
Pascalian threshold
This does not establish a consistent threshold, but rather the threshold depends on the ratio of the interquartile range of the agent’s background uncertainty to the value of the non-pascalian alternative at hand. Tarsney has suggested that this might be around 1 in 1 billion for practical purposes.
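A Monte Carlo sketch of the mechanism, with distributions and numbers invented for illustration (they are not Tarsney’s): compare “background value plus the sure alternative” against “background value plus the Pascalian bet” and check whether the bet’s outcome distribution first-order stochastically dominates.

```python
import random

random.seed(0)
N = 200_000

def background():
    # Invented wide background uncertainty about how much value exists anyway.
    return random.lognormvariate(20, 2)

sure_payoff = 1.0               # the non-Pascalian alternative
p_bet, bet_payoff = 1e-9, 1e12  # the Pascalian bet (EU = 1e3, far higher than 1.0)

sure = sorted(background() + sure_payoff for _ in range(N))
bet = sorted(background() + (bet_payoff if random.random() < p_bet else 0.0)
             for _ in range(N))

# Empirical first-order stochastic dominance: the bet dominates iff every quantile
# of its outcome distribution is at least as high as the sure option's.
print("bet dominates sure option:", all(b >= s for b, s in zip(bet, sure)))
# With background noise this wide, the answer is (almost surely) False: the rare
# huge payoff is swamped, so stochastic dominance no longer forces you to take it.
```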
Cons
This only says that it’s permissible to reject muggings below the pascalian threshold, but gives no guidance on whether you should or should not.
This reasoning would not apply to e.g. difference-making views.
Option 6: Bounded confidence
Perhaps there is some bound below which we cannot be confident that our probability estimates are meaningful, because we are computationally bounded and it’s hard to precisely calculate small probabilities. Plausibly for very small probabilities, any EU calculation will be little more than conjecture, so it is best to just stick with sure bets.
Pascalian threshold
It depends on how familiar we are with the territory and how complex the hypothesis is: We can have more confidence about simple hypotheses on topics that we know well. For example, Yudkowsky said that he is skeptical that one can be 99.99% confident that 53 is a prime number; by contrast, he said that he “would be willing to assign a probability of less than 1 in 10^18 to a random person being a Matrix Lord.”[11]
Cons
(This con also applies to all of the options below that set some level beneath which we write off a p as negligible:)
As Scott Alexander pointed out in this comment, you can divide risks into arbitrarily small pieces, so whether a given risk falls above or below the threshold depends on how you carve it up.
Option 7: Incomparability with worse/better options
Related to the prior option, and related to a response to the original Pascal’s wager—given uncertainty about unlikely events, maybe a greater expected benefit will be created by rejecting the mugger than by paying him. If you can’t be certain he’s not a matrix lord, how can you be certain that he won’t do the opposite of what he claims just to mess with you? This hypothesis seems pretty strange, but can we be sure it’s strictly lower p than that of the mugger holding up his end of the deal, especially given our unfamiliarity with interdimensional extortionists? If we are trying to maximize expected utility, then surely it is wrong to do x if we expect there is higher p that not-x will result in equal or greater utility; or, if we are unsure which is higher-EU, it may at least be permissible to choose either.
Pascalian threshold
Probably depends on the situation, but at similar levels to the previous point.
Con
If we are willing to use weird unlikely outcomes as an antidote to other weird unlikely outcomes, this would seem like it gets us into some weird conclusions where we can come up with counterintuitive reasons to avoid doing things that seem pretty mundane (see footnote for elaboration).[12]
Option 8: Iteratability
One of the reasons for acting based on expected value is that if you make the same bet enough times, on average the payout will be close to your expected value. However, for low probability bets, this is unlikely to be the case if you don’t get the chance to make similar bets enough times. Thus Kaj Sotala suggests basing your risk-aversion on the likelihood that a bet of a given probability will be repeated enough times to pay off in your lifetime.
Pascalian threshold
Quoting Sotala:
Define a “probability small enough to be ignored” [...] such that, over your lifetime, the expected times that the event happens will be less than one.
Sotala also considers using thresholds other than one based on your level of risk aversion.
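As a sketch of the arithmetic (the number of comparable bets per day and the remaining lifespan are made up):

```python
# Sotala-style cutoff: ignore probabilities so small that, across all the
# comparable bets you expect to face, the event is expected to happen < 1 time.
bets_per_day = 10            # invented guess at how often such bets come up
days_remaining = 50 * 365    # invented remaining lifespan

lifetime_bets = bets_per_day * days_remaining
threshold = 1 / lifetime_bets
print(f"ignore probabilities below ~{threshold:.1e}")   # ~5.5e-06 here
```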
Cons
Unless you are purely selfish, or the payoff from a bet can only benefit you, it seems that you might want people to make bets that will not pay off in their lifetimes, because over generations, if people keep making these bets, one of them might pay off.
Even if we account for this, iteratability still imposes a threshold that would rule out bets that will, in expectation, never come true in the whole lifespan of the universe. But in that case, why should we only care about “our” universe, if there are plausibly other universes in which the bet may turn out well?
Several other flaws are discussed in the comments to the post, including that we may want the expected number of times a really bad risk is realized to be less than one, in which case we are just picking an arbitrary level at which to be risk-averse.
Option 9: vibes
If the p of something is small, it seems paranoid to worry about it. It probably won’t happen, and average Joes will make fun of you for it. What more is there to say?[13]
Pascalian threshold
IDK, 10%? But not for things that seem normal like wearing seatbelts.
Con
It’s arbitrary.
Other options
Maybe whether we care about a risk should not be based on the probability itself, but rather on how certain we are of that probability estimate. I.e., suppose our estimate of p(x) is 5%, but it is mostly based on abstract reasoning that we can’t empirically confirm and has pretty wide error bars (it could be 0.5% or 50% for all we know), whereas p(y) is only 0.1%, but that estimate rests on a heap of empirical evidence and has a pretty tight confidence interval; then maybe we should care about y and not x. But this seems like systematically ignoring risks/benefits that we can’t get good evidence about, and yet these risks/benefits will nonetheless affect us.[14] Cf. generally: streetlight effect, cluelessness, “No Evidence” Is A Red Flag.
There may be other practical reasons to not act on probabilities below a certain threshold that I haven’t thought of; I’d be interested to hear thoughts.
- ^
Hereafter, I will refer to utility, value, etc., in the positive direction for simplicity, but most of these points apply to disutility, disvalue, etc.
- ^
NB this post is not intended to discuss theological or game-theory implications that result from specific Pascal’s wager/mugging thought experiments, but rather the principle behind how to deal with small probabilities generally.
- ^
And use some other decision procedure for infinite quantities.
- ^
E.g. Alan Hajek:
Now, suppose you wager for God if and only if your lottery ticket wins in the next lottery. And let’s suppose there’s a billion tickets in the lottery. One in a billion times infinity is still infinity. [...] I wait to see whether a meteor quantum tunnels through this room before the end of our interview. Some tiny probability of this happening — I don’t know, one in a googolplex, call it — multiply that by infinity, and I have infinite expected utility for this strategy. Wager for God if and only if the meteor happens. And now it starts to look like whatever I do, there’s some positive probability that I will get the infinite payoff [...]
- ^
I.e., that it will encourage others to Pascal’s-mug you.
- ^
See, e.g., discussion of the leverage penalty here.
- ^
You might think it O.K. to ignore infinite value in the usual case where the p is small, and in most cases that will turn out fine, but I think even the most antipascalian person would say that we need to reckon with infinities if their p gets into macroscopic percentages.
- ^
Suppose you are a bounded-utility paperclip maximizer and are indifferent between staples and thumbtacks. Some entity hacks into you and will either (A) give you a bunch of staples and modify you to be an (unbounded) staple maximizer or (B) create a bunch of thumbtacks and modify you to be an (unbounded) thumbtack minimizer. Although both rank poorly on your current utility function, since they do not lead to more paperclips, B is clearly worse. You could have a utility function for which this is not the case, but I think most people would prefer A over B, provided that the easily satisfied preferences they would be modified to have under A are not something they currently find abhorrent. This opens up the possibility that the hacker will create situations that cause you unbounded amounts of (dis)utility, say by giving you arbitrary amounts of staples or thumbtacks. A similar argument could be made for hedonic utilitarianism, but I am using preference utilitarianism for simplicity. The response to this can be that the utility can still be bounded if we apply a bounded function to the whole thing (i.e., our function is like f(V+W), where V is our evaluation of things according to our present utility function, W is whether our future preferences are satisfied, and f(x) is some bounded function like a logistic curve). I don’t have a strict logical response to this, except that it seems pretty counterintuitive to place only bounded utility on experiences that will be causing you unbounded utility when you are experiencing them. But maybe this is less counterintuitive than alternatives.
- ^
Robin Hanson has suggested that the logic of a leverage penalty should stem from the general improbability of individuals being in a unique position to affect many others [...]. At most 10 out of 3↑↑↑3 people can ever be in a position to be “solely responsible” for the fate of 3↑↑↑3 people if “solely responsible” is taken to imply a causal chain that goes through no more than 10 people’s decisions.
- ^
See also here although I don’t necessarily agree that this implies that utility is bounded.
- ^
Presumably because of the higher Kolmogorov-complexity of the latter idea.
- ^
Imagine your friend gives you a lottery ticket for your birthday, and the jackpot is $50 million, and your probability of winning is 1 in 10 million, so, in expectation, this ticket is worth $5. But—what if you are in a simulation and the beings running the simulation will pessimize your utility function because they don’t like gambling? Is the chance of this higher than you winning the lottery? The universe is big and there could be a lot of simulations out there, and a lot of religions are against gambling; maybe the simulators put that idea in our culture for a reason. I’m not saying you should put macroscopic p on this hypothesis, but are you really 99.99999% sure it’s false—or rather, sure enough for the risk to be worth the $5 in expectation from the lottery ticket? You could say there’s also a tiny chance that the simulators will reward you for gambling, but this is even more speculative than the hypothesis I just laid out. But now we have walked into a Pascal’s wager by omission in order to avoid one by commission. And if arbitrarily low p of high value is influencing our decisions, this should also apply to bets that are more certain than the lottery. Maybe we can use complexity penalties or other adjustments to make the “weird” hypotheses lower-EU than the mundane gambles. But this may fail for reasons that have been discussed elsewhere, so maybe we just bite the bullet and start engaging in simulationist deism.
- ^
If I were to actually try to steelman this position, I would say:
You do not need a complicated logical reason for behaving a certain way, it just needs to work. Human beings have to survive in an environment where there are real risks we have to avoid but we also can’t let ourselves get Pascal’s-mugged or be paralyzed by fear of speculative harms, so natural selection, cultural evolution, and in-lifetime learning pushed us toward an optimal level of risk aversion.
However, I think this fails because (1) if I have to come up with a sophisticated reason for a position that the advocates of that position never state, I am probably giving it more credit than it deserves, (2) intuitive risk aversion will likely transfer poorly to novel situations, and (3) it already fares poorly in the current environment (e.g. Wilkinson, 2020, section 2).
- ^
Holden Karnofsky gives a plausible version of this strategy here.
Option 10: Kelly betting.
If you bet repeatedly on a gamble in which with probability p you win k times what you bet, and otherwise lose your bet, the fraction of your wealth to bet that maximises your growth rate is p−(1−p)/k. This implies that no matter how enormous the payoff, you should never bet more than p of your wealth. The probability you assign to unsubstantiated promises from dodgy strangers should be very small, so you can safely ignore Pascal’s Wager.
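A quick sketch of that formula (the mugger’s numbers are invented):

```python
def kelly_fraction(p, k):
    # Optimal fraction of bankroll to stake when, with probability p, you win
    # k times your stake, and otherwise lose the stake.
    return p - (1 - p) / k

# A mugger promising a 10**100-fold return whom you believe with p = 1e-12:
print(kelly_fraction(1e-12, 1e100))  # ~1e-12: never stake more than p, however huge k is
```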
Suppose the mugger says that if you don’t give him $5, he’ll take away 99.999999999999999% of your wealth. I don’t think Kelly bets save you there? The logarithms of Kelly bets help you on the positive side but hurt you on the negative side.
Kelly bets only apply to the situation where you have a choice to gamble or not, and not gambling leaves your wealth unaffected. When the Kelly bet is negative, that means you should decline the bet.
If the mugger is capable of confiscating 99.999999999999999% of your wealth, why is he offering the bet?
TLDR: Kelly bets are risk-avoidant. I think Kelly bets prevent you from pouring all your money into a Pascal’s-mugging chance of winning ungodly sums of money, but Kelly bets will pay a mugger exorbitant blackmail to avoid a Pascal’s-mugging chance of losing even a realistic amount of money.
---------
Starting with a pedantic point: none of the Pascal’s mugging situations we’ve talked about are true Kelly bets. The mugger is not offering to multiply your cash bet if you win. Your winnings are saved lives, and they cannot be converted back into a bankroll.
But we can still translate a Pascal’s mugging into the language of a Kelly bet. A translation of the standard Pascal’s mugging might be: the mugger offers to googolplex-le your money[1], and you think he has a one in a trillion chance of telling the truth. A Kelly bet would say that despite the magnificent EV of the payout, you should put only ≈a trillionth of your wealth into this bet. So in this case (the one like the original Pascal’s mugging, the one you responded to), the Kelly bet does the “right” thing and doesn’t pay the mugger.
But now suppose the Pascal mugger says “I am a jealous god. If you don’t show your belief in Me by paying Me $90,000 (90% of your wealth), I will send you and a googolplex other people to hell and take all (or all but a googolplexth) of your wealth”. And suppose you think that there’s a 1 in 1 trillion chance he’s telling the truth.
Can we translate this into a Kelly bet? Yes! (I think?) The Kelly criterion tells you how to allocate your portfolio among many assets. Normally we assume there’s a “safe” asset, a “null” asset, one where you are sure to get exactly your money back (the asset into which you put most of your portfolio when you make a small bet). But that asset is optional. We can model this Kelly bet by saying there are two assets into which you can allocate your portfolio. Asset A’s payoff is “return 10% (lose 90%) of the bet with certainty”. Asset B’s payoff is “with probability 999,999,999,999⁄1 trillion (almost 1), return your money even, but with chance 1 in 1 trillion, lose ≈everything”. There is no “safe cash” option—you must split your portfolio between assets A and B.
Here, the Kelly criterion really, really hates losing ≈all your bankroll. It says to put almost everything into the safe asset A (pay the mugger), because even a 1 in 1 trillion chance of losing ≈everything isn’t worth it. The log of ≈0 (having lost almost everything) is a very negative number.
Perhaps it would be useful to write exact math out.
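Here is a sketch of that math for the two pure choices as described above (pay the full $90,000 or refuse), taking “all but a googolplexth” literally and working in log space, since a googolplexth underflows a float; the other numbers are the ones from the comment:

```python
import math

q = 1e-12               # credence that the mugger can follow through
keep_if_pay = 0.10      # pay $90,000 of a $100,000 bankroll and keep 10%
log_keep_if_refuse_and_real = -1e100 * math.log(10)  # log of a googolplexth (10**-10**100)

# Expected log wealth (per unit of current wealth) for the two pure choices:
E_log_pay = math.log(keep_if_pay)                                        # ~ -2.3
E_log_refuse = (1 - q) * math.log(1.0) + q * log_keep_if_refuse_and_real  # ~ -2.3e88

print(E_log_pay, E_log_refuse)
# Paying wins on expected log wealth: the q-weighted log of ~0 swamps everything else.
# Note the conclusion is sensitive to how total the loss is: refusing wins unless the
# surviving fraction is below about 10**(-1/q), i.e. 10**(-1e12) at q = 1e-12.
```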
Importantly, I think for the math to work out he has to be offering a payoff proportional to your bet, not a fixed payoff?
Good point, this combines the iteratability justification for EV plus the fact that we have finite resources with which to bet. But doesn’t this break down if you are unsure how much wealth you have (particularly if the “wealth” being gambled is non-monetary, for example years of life)?
Suppose the devil comes to you and says “if you take my bet you can live out your full lifespan, but there will be a 1 in 1 million chance I will send you to Hell at the end for 100 billion years. If you refuse, you will cease to exist right now.” Well, the wealth you are gambling with is years of life, but it’s unclear how many you have to gamble with. We could use whatever our expected number of years is (conditional on taking the bet), but of course, then we run back into the problem that our expectations can be dominated by tiny probabilities of extreme outcomes. This isn’t just a thought experiment, since we all make gambles that may affect our lifespan, and yet we don’t know how long we would have lived by default.

Edit: I realized that the devil example has the obvious flaw that as the expected default lifespan increases, so does the number of years that you’re wagering, so you should always take the bet based on Kelly betting; but this point is more salient with less Pascalian lifespan-affecting gambles. I guess the question that remains is that the gamble is all or nothing, so what do we do if Kelly betting says we should wager 5% of our lifespan? Maybe the answer is: bet your life 5% of the time, or make gambles that will end your life with no more than 5% probability.
The uncertainties that will always be present for a real gamble make the Kelly bet rash, uncertainties about not only the numbers, but about whether the preconditions for the criterion obtain.
Because of this, Zvi recommends that Kelly is the right way to think, and you should evaluate the Kelly recommendation as best you can, but you should then bet no more than 25% to 50% of that amount. Further elaboration here.
You should bound your utility function (not just your probabilities) based on how much information your brain can handle. Your utility function’s dynamic range should never outpace the dynamic range of the probabilities your brain can represent. Also, you shouldn’t claim to put $googolplex utility on anything until you’re at least Ω(log(googolplex))[1] seconds old.
Utility functions come from your preferences over lotteries, and not every utility function corresponds to a reasonable preference over lotteries. You can claim “My utility function assigns a value of Chaitin’s constant to this outcome”, but that doesn’t mean you can build a finite agent that follows that utility function (it would be uncomputable). Similarly, you can claim “my agent follows a utility function that assigns to outcomes A, B, and C values of $0, $1, and $googolplex”, but you can’t build such a beast with real physics (you’re implicitly claiming your agent can distinguish between probabilities so fine that no computer with memory made from all the matter in the eventually observable universe could compute them).
And (I claim) almost any probability you talk about should be bounded below by roughly 2^−(number of bits you’ve ever seen). That’s because (I claim) almost all your beliefs are quasi-empirical, even most of the a priori ones. For example, Descartes considered the proposition “The only thing I can be certain of is that I can’t be certain of anything” before quasi-empirically rejecting that proposition in favor of “I think, therefore I am”. Descartes didn’t just know a priori that the proposition was false—he had to spend some time computing to gather some (mental) evidence. It’s easy to quickly get probabilities exponentially small by collecting evidence, but you shouldn’t get them more than exponentially small.
You know the joke about the ultrafinitist mathematician who says he doesn’t believe in the set of all integers? A skeptic asks “is 1 an integer?” and the ultrafinitist says “yes”. The skeptic asks “is 2 an integer?”; the ultrafinitist waits a bit, then says “yes”. The skeptic asks “is 100 an integer?”; the ultrafinitist waits a bit, waits a bit more, then says “yes”. This continues, with the ultrafinitist waiting more and more time before confirming the existence of bigger and bigger integers, so you can never catch him in a contradiction. I think you should do something like that for small probabilities.
Big Omega notation. “Grows at least that fast, with fudge factor constants”.
Not sure I fully understand this comment, but I think it is similar to option 4 or 6?
Why is seconds the relevant unit of measure here?
Yep! With the addendum that I’m also limiting the utility function by the same sorts of bounds. Eliezer in Pascal’s Muggle (as I interpret him, though I’m putting words in his mouth) was willing to bound agents’ subjective probabilities, but was not willing to bound agents’ utility functions.
The real unit is “how many bits of evidence you have seen/computed in your life”. The number of seconds you’ve lived is just something proportional to that—the Big Omega notation fudges away proportionality constant.
As the offer gets bigger, it is more likely to be a lie, mistake, or misunderstanding.
Offer A: “If you drive me to San Francisco, I’ll pay you twenty dollars.”
Offer B: “If you drive me to San Francisco, I’ll pay you a billion dollars.”
Offer C: “If you drive me to San Francisco, I’ll pay you with this gemstone.” (Which looks like a very valuable diamond to you.)
Offer D: “If you drive me to San Francisco, I’ll pay you 3↑↑↑3 dollars.”
Offer A is credible — but you’re being underpaid; at least make them pay the tolls too.
Offer B is conceivable, but not readily believable. The chance that someone is going to pay you a billion dollars to drive them to SF is very, very low. Perhaps they’re just lying to you. Or maybe you misheard them over the noise of traffic. Maybe they actually said “If you [build a robot car that can] drive me to San Francisco, I’ll pay you a billion dollars [to acquire your company]” and you didn’t hear the bracketed parts.
Offer C is likewise conceivable, and you probably even heard them right. But you’re probably mistaken about reality — the gemstone is probably moissanite or cubic zirconia or something other than diamond.
Offer D is not conceivably possible. Either they are just making stuff up, or someone is confused about what ↑ means, or you’ve been tricked and they really said “three, um um um, three doll hairs”.
But what about Pascal’s Muggle? If you want to cancel out 3↑↑↑3 by multiplying it with a comparably small probability, the probability has to be incredibly, incredibly small; smaller than a Bayesian can update away from after 13 billion years of viewing evidence. So where did that small number come from? If the super-exponential smallness came from priors, then you can’t reasonably update away from it—you’re always going to believe the proposition is false, even if given an astronomical amount of evidence. Are you biting the bullet and saying that even if you find yourself in a universe where this sort of thing seems normal and like it will happen all the time, you will say a priori that this apparently normal stuff is impossible?
There are claims for which believing the claim would require more confidence than I have in my own thought processes. That is, if I think I have evidence for X, I should first doubt whether my thinking has run astray and ceased to be connected to reality, rather than going ahead and believing X.
After all, it’s not just the claim that can be true or false. My reasoning can run truly or falsely too. There are circumstances under which self-doubt is the correct mental motion: “The fact that I am about to believe this claim is itself evidence. What has been true about others who have come to believe claims like this one?”
Example: Occasionally, a human will come to believe that God is telling them to go murder a bunch of people. As far as anyone can tell, they have all been wrong. And the world would be better off if each of them had thought, “Huh, everyone else who’s ever come to this conclusion turned out to be wrong. I wonder if maybe I’m having a schizophrenia or something?”