Real-world Newcomb-like Problems

Elaboration of: A point I’ve made before.

Summary: I phrase a variety of realistic dilemmas so as to show how they’re similar to Newcomb’s problem.

Problem: Many LW readers don’t understand why we bother talking about obviously-unrealistic situations like Counterfactual Mugging or Newcomb’s problem. Here I’m going to put them in the context of realistic dilemmas, identifying the common thread, so that the parallels are clear and you can see how Counterfactual Mugging et al. are actually highlighting relevant aspects of real-world problems—even though they may do it unrealistically.

The common thread across all the Newcomb-like problems I will list is this: “You would not be in a position to enjoy a larger benefit unless you would cause [1] a harm to yourself within particular outcome branches (including bad ones).” Keep in mind that the “benefit” can be probabilistic (so having this propensity doesn’t always get you the benefit). Also, many of the relationships listed exist because your decisions are correlated with others’.
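
To make the schema concrete (the symbols are my own shorthand, not something from the statement above): write B for the larger benefit, c for the harm you would cause yourself in the bad branches, and p for the chance of landing in a branch where the benefit actually arrives, given that you have the propensity. In the simplest version, where the cost only comes due in the bad branches and the no-propensity baseline is normalized to zero, holding the propensity is worthwhile whenever

$$p \cdot B - (1 - p)\,c > 0,$$

while an agent without the propensity never gets the offer at all. The entries below differ mainly in what plays the role of B, c, and p, and in whether the dependence runs through a predictor, natural selection, or other people who reason the way you do.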

Without further ado, here is a list of both real and theoretical situations, in rough order from most to least “real-world”ish:

Natural selection: You would not exist as an evolution-constructed mind unless you would be willing to cause the spreading of your genes at the expense of your life and leisure. (I elaborate here.)

Expensive punishment: You would not be in the position of enjoying a crime level this low unless you would cause a net loss to yourself to punish crimes when they do happen. (My recent comments on the matter.)

“Mutually assured destruction” tactics: You would not be in the position of having a peaceful enemy unless you would cause the destruction of both yourself and the enemy in those cases where the enemy attacks.

Voting: You would not be in a polity where humans (rather than “lizards”) rule over you unless you would cause yourself to endure the costs of voting despite the slim chance of influencing the outcome.

Lying: You would not be in the position where your statements influence others’ beliefs unless you would be willing to state true things even when it is suboptimal for you that others believe them. (Kant/Categorical Imperative name-check)

Cheating on tests: You would not be in the position to reap the (larger) gains of being able to communicate your ability unless you would forgo the benefits of an artificially high score. (Kant/Categorical Imperative name-check)

Shoplifting: You would not be in the position where merchants offer goods of this quality, at this low a markup and with this much leniency in security, unless you would pass up the opportunity to shoplift even when you could get away with it, or at least hold mistaken beliefs about your chances of getting away with it that lead you to act the same way. (Controversial; see previous discussion.)

Hazing/abuse cycles: You would not be in the position of being unhazed/unabused (as often) by earlier generations unless you would forgo the satisfaction of hazing or abusing later generations in the cases where you were abused yourself.

Akrasia/addiction: You would not be free of addictions and bad habits unless you would cause yourself the pain of not feeding the habit during the existence-moments when you do have addictions and bad habits.

Absent-Minded Driver: You would not ever have the opportunity to take the correct exit unless you would sometimes drive past it.

Parfit’s Hitchhiker: You would not be in the position of surviving the desert unless you would cause the loss of money to pay the rescuer.

Newcomb’s problem: You would not be in the position of Box #2 being filled unless you would forgo the contents of Box #1.

Newcomb’s problem with transparent boxes: Ditto, except that Box #2 isn’t always filled.

Prisoner’s Dilemma: You would not be in the position of having a cooperating partner unless you would cause the reduction in your own “expected prison avoidance” that comes from cooperating yourself.

Counterfactual Mugging: You would not ever be in the position of receiving lots of free money unless you would cause yourself the loss of a smaller amount of money in those cases where you lose the coin flip.
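
To put numbers on this last one: in the standard statement of Counterfactual Mugging, Omega flips a fair coin, would have paid you $10,000 on heads if and only if you are the sort of agent who would hand over $100 on tails, and asks for the $100 on tails. (Those particular amounts are the conventional ones, not something this post depends on.) Here is a minimal sketch of the comparison at the level of dispositions:

```python
# Counterfactual Mugging, evaluated at the level of dispositions.
# Omega flips a fair coin.  Heads: it hands over the prize, but only to an
# agent who *would* pay up on tails.  Tails: it asks the agent for the fee.

def expected_winnings(would_pay, prize=10_000, fee=100):
    """Expected payoff of an agent with the given disposition (illustrative amounts)."""
    heads = 0.5 * (prize if would_pay else 0)  # refusers never see the free money
    tails = 0.5 * (-fee if would_pay else 0)   # payers are out of pocket on tails
    return heads + tails

print(expected_winnings(True))   # 4950.0
print(expected_winnings(False))  # 0.0
```

Judged act by act, handing over the $100 after a tails flip is a pure loss; the gain only shows up when you evaluate the propensity across both branches, which is the pattern every entry above is pointing at.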

[1] “Cause” is used here in the technical sense, which requires the effect to be either in the future or, in timeless formalisms, a descendant of the minimal set (in a Bayesian network) that screens off knowledge about the effect. In the parlance of Newcomb’s problem, it may feel intuitive to say that “one-boxing causes Box #2 to be filled”, but this is not correct in the technical sense.
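
The screening-off point can be made concrete with a toy joint distribution. (The network structure and the 90%/95% numbers below are my own illustration, not part of the footnote.) The disposition is a common ancestor of both the act and the box contents, so the two are correlated marginally, but once you condition on the disposition, the act tells you nothing further about the box; that is why one-boxing correlates with, but does not cause, Box #2 being filled.

```python
from itertools import product

# Toy Bayesian network for Newcomb's problem (illustrative numbers only):
#   D (disposition) -> prediction -> B (Box #2 filled or empty), with the
#   prediction marginalized out as a 90%-accurate read of D, and
#   D (disposition) -> A (act actually taken), matching D 95% of the time.

def joint():
    """Enumerate the joint distribution P(D, A, B)."""
    table = {}
    for d, a, b in product(["one-box", "two-box"],
                           ["one-box", "two-box"],
                           ["filled", "empty"]):
        p_d = 0.5                       # prior over dispositions
        p_a = 0.95 if a == d else 0.05  # small chance of acting out of character
        p_b = 0.9 if (b == "filled") == (d == "one-box") else 0.1
        table[(d, a, b)] = p_d * p_a * p_b
    return table

def cond(table, target, given):
    """P(target | given), computed by summing over the enumerated joint."""
    num = sum(p for k, p in table.items() if target(k) and given(k))
    den = sum(p for k, p in table.items() if given(k))
    return num / den

def filled(k):
    return k[2] == "filled"

t = joint()

# Marginally, the act and the box contents are correlated ...
print(cond(t, filled, lambda k: k[1] == "one-box"))   # ~0.86
print(cond(t, filled, lambda k: k[1] == "two-box"))   # ~0.14
# ... but conditioning on the disposition screens the act off from the box:
print(cond(t, filled, lambda k: k[0] == "one-box" and k[1] == "one-box"))  # 0.9
print(cond(t, filled, lambda k: k[0] == "one-box" and k[1] == "two-box"))  # 0.9
```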