Fixedness From Frailty

Thinking about two separate problems has caused me to stumble onto a third, deeper problem. The first is psychic powers: what evidence would convince you to believe in psychic powers? The second is the counterfactual mugging problem: what would you do when presented with a situation where a choice will hurt your actual future self, but benefit a version of you in a future that never happened and never will happen for the you making the decision?

Seen as a simple two-choice problem, there are some obvious answers: “Well, they passed tests X, Y, and Z, so they must be psychic.” “Well, they passed tests X, Y, and Z, so that means I need to come up with more tests to know if they’re psychic.” “Well, if I’m convinced Omega is genuine, then I’ll pay him $100, because I want to be the sort of person that he rewards, so any mes in alternate universes are better off.” “Well, even though I’m convinced Omega is genuine, I know I won’t benefit from paying him. Sorry, alternate-universe mes that I don’t believe exist!”

I think the correct choice is a third option: I have either been tricked or gone insane.1 I probably ought to run away, then ask someone whom I have more reason to believe is non-hallucinatory for directions to a mental hospital.

The math behind this is easy: I have prior probabilities that I am gullible (low), that I am insane (very low), and that psychics / Omega exist (very, very, very low). When I see someone pass tests X, Y, and Z that suggest they are psychic, or see the appearance of an Omega who possesses great wealth and predictive ability, that is generally evidence for all three possibilities. I can imagine evidence which is counter-evidence for the first but evidence for the second two, but I can’t imagine evidence consistent with the axioms of probability which raises the probability of magic (of the normal or sufficiently-advanced-technology kind) above the probability of insanity.2

This result is shocking and unpleasant, though: I have decided some things are literally unbelievable, because of my choice of priors. P(Omega exists | I see Omega) << 1 by definition, because any evidence for the existence of Omega is at least as strong evidence for the non-existence of Omega, since it is evidence that I’m hallucinating! We can’t even be rescued by “everyone else agrees with you that Omega exists,” because the potential point of failure is my brain, which I need to trust to process any evidence. It would be nice to be a skeptic who is able to adapt to the truth, regardless of what it is, but there seems to be a boundary to my beliefs created by my experience of the human condition. Some hypotheses, once they fall behind, simply cannot catch up with other competing hypotheses.

That is the primary consolation: this isn’t simple dogmatism. Those priors are the posteriors of decades of experience in a world without evidence for psychics or Omegas but where gullibility and insanity are common- the preponderance of the evidence is already behind gullibility or insanity as a more likely hypothesis than a genuine visitation of Omega or manifestation of psychic powers. If we lived in a world where Omegas popped by from time to time, paying them $100 on a tails result would be sensible. Instead, we live in a world where people often find themselves with different perceptions from everyone else, and we have good reason to believe their data is simply wrong.

I worry this is an engineering answer to a philosophical problem- but it seems that a decision theory that adjusts itself to give the right answer in insane situations is not going down the right track. Generally, a paradox is a confusion in terms, and nothing more- if there is an engineering sense in which your terms are well-defined and the paradox doesn’t exist, that’s the optimal result.

I don’t offer any advice for what to do if you conclude you’re insane besides “put some effort into seeking help,” because that doesn’t seem to me to be a valuable question to ponder (I hope to never face it myself, and don’t expect significant benefits from a better answer). “How quickly should I get over the fact that I’m probably insane and start realizing that Narnia is awesome?” does not seem like a deep question about rationality or decision theory.

I also want to note this is only a dismissal of acausal paradoxes. Causal problems like Newcomb’s Problem are generally things you could face while sane, keeping in mind that you can’t tell the difference between an acausal Newcomb’s Problem (where Omega has already filled or not filled the box and left it alone) and a causal Newcomb’s Problem (where the entity offering the choice has rigged it so that selecting both box A and box B obliterates the money in box B before you can open it). Indeed, the only trick to Newcomb’s Problem seems to be sleight of hand: the situation is effectively causal, but it is described as acausal because of the introduction of a perfect predictor, and that description is the source of the confusion.

1- Or am dreaming. I’m going to wrap that into being insane- it fits the same basic criteria (perceptions don’t match external reality), but the response is somewhat different (I’m going to try to enjoy the ride / wake up from the nightmare rather than find a mental hospital).

2- I should note that I’m not saying that the elderly, when presented with the internet, should conclude they’ve gone insane. I’m saying that when a genie comes out of a bottle, you look at the situation surrounding it, not its introduction: “Hi, I’m a FAI from another galaxy and have I got a deal for you!” shouldn’t be convincing, but “US Robotics has just built a FAI and collected tremendous wealth from financial manipulation” could be. The standard “am I dreaming?” diagnostics seem like they would be valuable, but “am I insane?” diagnostics are harder to calibrate.

EDIT- Thanks to Eugene_Nier, you can read Eliezer’s take on a similar issue here. His Jefferson quote is particularly striking.