Out-of-body reasoning (OOBR)

The abbreviated title is a pun on “über”. As the über-theory, it has to be believed, right?

Outline

I’d like to suck all of the joy and intrigue out of Sleeping Beauty and related problems by claiming that a rational agent always, boringly, makes decisions thus:

  • Adopt a God-like “out-of-body” mindset: define an experiment in which you are the (programmable) subject, and a probability space or model representing the objective chances in that experiment.

  • Define a reward function mapping a decision strategy (program, if you will) and experimental outcomes to a real number, i.e. how well the strategy panned out for each possible turn of events.

  • Choose your decision strategy to maximise the expected reward.
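
To make the recipe concrete, here’s a minimal Python sketch (the names and conventions are mine, purely illustrative): an experiment is a list of (chance, outcome) pairs, a reward function scores a strategy against an outcome, and the agent simply maximises the expectation.

```python
# A minimal sketch of the OOBR recipe; names are illustrative only.

def expected_reward(strategy, experiment, reward):
    """Expected reward of `strategy` in `experiment`, where `experiment`
    is a list of (chance, outcome) pairs and `reward(strategy, outcome)`
    says how well the strategy panned out for that turn of events."""
    return sum(chance * reward(strategy, outcome)
               for chance, outcome in experiment)

def best_strategy(strategies, experiment, reward):
    """Choose the decision strategy maximising expected reward."""
    return max(strategies, key=lambda s: expected_reward(s, experiment, reward))
```

Everything below is an instance of this template.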

If you can define the model & experiment unambiguously, there is no need for credence: the rational agent does not use it for decision-making. Let’s put it this way:

  • Inner credence, meaning any statistic derived wholly from the model and the observed outcomes, does not influence rational decision-making. There are multiple natural ways of defining such a statistic, e.g. “halfer” vs. “thirder”, but the rational agent isn’t obliged to employ any of them.

If you cannot define the model/​experiment unambiguously, perhaps because you were given only partial information about your circumstances or you cannot compute the complex physical model that determines your world, then you must invoke your:

  • Outer credence, which assigns subjective probability to the various possible models/experiments.

Then use your outer credence to define a new, hybrid model that says: pick a model/​experiment according to outer credence, then enact that experiment. Make a hybrid reward function to fit. Then, as before, choose a decision strategy maximising the expected reward.
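
In the same illustrative conventions as the sketch above, the hybrid construction just scales each candidate experiment’s chances by your outer credence for it, tagging outcomes with the model they came from so the hybrid reward function can dispatch accordingly:

```python
def hybrid_experiment(experiments, outer_credence):
    """Mix candidate experiments into one hybrid experiment.

    `experiments` maps a model name to its list of (chance, outcome) pairs;
    `outer_credence` maps the same names to subjective weights summing to 1.
    Hybrid outcomes are (name, outcome) pairs, so a hybrid reward function
    can apply whichever per-model reward is appropriate."""
    return [(outer_credence[name] * chance, (name, outcome))
            for name, experiment in experiments.items()
            for chance, outcome in experiment]
```

(Weighting the chances and dispatching the reward by model name is equivalent to weighting the per-model reward functions directly; this feeds straight back into best_strategy above.)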

Sleeping Beauty

See https://www.lesswrong.com/tag/sleeping-beauty-paradox.

The model is that a coin is flipped. The experiment involves waking Beauty some number of times depending on the coin flip, with drugs applied to force all wake-ups to be indistinguishable.

Her reward function depends on the terms of the experiment. Here are two experiments, each a mild variant of the original Sleeping Beauty problem:

  • Experiment 1. Coin is biased 1:2 (Heads:Tails); wake once in either case. Guess the coin (“H”/“T”); receive $1 each time you’re right. The reward function should reflect your profit and your strategy should be chosen to maximise its expected value. So you should guess “T”, for an expected profit of $2/3 per run (versus $1/3 for always guessing “H”).

  • Experiment 2. Coin is biased 2:1 (Heads:Tails); wake four times on Tails, once on Heads. Guess the coin (“H”/“T”); receive $1 at the end only if you made no mistakes, otherwise receive nothing. In this case, your strategy should be to guess “H”, securing the $1 with probability 2/3 (always guessing “T” pays off only with probability 1/3).
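
To make the two bullets concrete, here’s a sketch in my own formalisation: since the wake-ups are indistinguishable, a strategy is just the probability p of guessing “H” at each one.

```python
# Expected reward in each experiment for a strategy that guesses "H" with
# probability p at every wake-up.

def exp1_reward(p):
    # Coin biased 1:2, so P(Heads) = 1/3; one wake-up; $1 per correct guess.
    return (1/3) * p + (2/3) * (1 - p)

def exp2_reward(p):
    # Coin biased 2:1, so P(Heads) = 2/3; four wake-ups on Tails, one on
    # Heads; $1 only if every guess was correct.
    return (2/3) * p + (1/3) * (1 - p) ** 4

grid = [i / 100 for i in range(101)]
print(max(grid, key=exp1_reward))  # 0.0 -> always guess "T" (worth $2/3)
print(max(grid, key=exp2_reward))  # 1.0 -> always guess "H" (worth $2/3)
```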

Provided that Beauty has been told which experiment she’s in, the rational decision is then obvious. Note that credence made no appearance in the analysis.

Credence, specifically outer credence, is required when Beauty is uncertain which experiment is in play. Let’s say she has credence q for it being experiment 1. Then:

  • The (hybrid) model should be that with probability q it’s experiment 1, else with probability 1−q it’s experiment 2.

  • The (hybrid) reward function is weighted accordingly (note: it’s important that the reward functions used the same scale/“currency” originally; otherwise we have to rescale first): a strategy’s hybrid expected reward is q times its expected reward under experiment 1 plus 1−q times its expected reward under experiment 2.
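
Continuing the earlier sketch (and reusing exp1_reward and exp2_reward from it), the hybrid calculation looks like this; in this formalisation the best strategy flips from always guessing “T” to always guessing “H” as q crosses 1/2:

```python
def hybrid_reward(p, q):
    # Outer credence q for experiment 1, 1-q for experiment 2; both reward
    # functions are already in dollars, so no rescaling is needed.
    return q * exp1_reward(p) + (1 - q) * exp2_reward(p)

grid = [i / 100 for i in range(101)]
for q in (0.8, 0.2):
    print(q, max(grid, key=lambda p: hybrid_reward(p, q)))
# 0.8 0.0  -> guess "T"
# 0.2 1.0  -> guess "H"
```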

Inner credence, specifically Thirder-style long-run averages, doesn’t do so well here. Beauty’s Thirder-style credence for H, the proposition that the coin turned up Heads, is 1/3 in both experiments (irrespective of q); yet this is insufficient to guide the decision, guess “H” or guess “T”, that is in 1-1 correspondence with H. To use Thirder-style credence, you need to consider it in the context of the experimental protocol: it’s a rational cul-de-sac.

The same sort of criticism can be levelled at Halfer-style credence as well, of course.

The Absent-Minded Driver

See https://www.lesswrong.com/posts/GfHdNfqxe3cSCfpHL/the-absent-minded-driver for example.

The experiment is that the driver can turn off at the first or second junction, or continue to the end, with rewards 0, 4 and 1 respectively, but he can’t distinguish the junctions and will forget about the first once he’s passed it. The model is that he’ll turn off at any given junction with probability p (to be optimised).

It turns out that p = 1/3 is optimal, the so-called planning-optimal decision with respect to the reward function R whose expectation given p is

E[R | p] = 0·p + 4·(1−p)·p + 1·(1−p)² = 1 + 2p − 3p²,

maximised at p = 1/3 where it equals 4/3.
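
A quick numerical check of that expectation (just a grid search; the exact optimum is p = 1/3):

```python
def amd_expected_reward(p):
    # Exit at X1 with prob p (reward 0), at X2 with prob (1-p)*p (reward 4),
    # or continue to the end with prob (1-p)**2 (reward 1).
    return 4 * (1 - p) * p + (1 - p) ** 2

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=amd_expected_reward)
print(best, amd_expected_reward(best))  # ~0.333, ~1.333 (i.e. p = 1/3, E = 4/3)
```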

Various papers on AMD see it as a paradox that

α(X1)·E[R | X1] + α(X2)·E[R | X2] ≠ E[R],

where X1 and X2 are the circumstances that the driver is at the first and second junctions respectively, and α is inner credence, in apparent violation of the law of total expectation. This is nonsense: X1 and X2 are not events partitioning a probability space and α is not a probability, so the law of total expectation does not apply.

It’s worth repeating this: don’t use inner credence like a probability. A model that has α(X1) and α(X2) as probabilities is the one in which the driver has an α(X2) chance of teleporting past junction X1 (that had no reward) at the outset. Of course the expected reward in the teleportation experiment is higher than in the original AMD problem!
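
Under my reconstruction of the numbers (rewards 0, 4, 1 and the planning-optimal p = 1/3), both of those claims check out numerically:

```python
p = 1 / 3
alpha_x1 = 1 / (2 - p)   # fraction of junction visits that are to X1 (= 3/5)
alpha_x2 = 1 - alpha_x1  # = 2/5

e_given_x1 = 4 * (1 - p) * p + (1 - p) ** 2  # whole journey lies ahead of X1: 4/3
e_given_x2 = 4 * p + (1 - p)                 # exit pays 4, continuing pays 1: 2

# alpha is a visit frequency, not a probability over a partition, so this
# weighted sum need not equal the true expected reward (which is e_given_x1,
# since the whole journey lies ahead of X1).  The 8/5 figure is exactly the
# expected reward of the teleportation experiment that starts the driver at
# X2 with chance 2/5.
print(alpha_x1 * e_given_x1 + alpha_x2 * e_given_x2)  # 1.6 (= 8/5)
print(e_given_x1)                                     # 1.333... (= 4/3)
```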

It’s amusing to note, however, that a natural form of inner credence is, in this case, sufficient to determine the driver’s strategy should information concerning the rewards be scrubbed from his brain. If his credence(*) for being at X1 is α then he should turn off with probability 2 − 1/α.

Do we think that’s how he operates, though, to deduce the turn-off probability from his inner credence on the road? No, he constructs his strategy before the journey starts. We only said that inner credence is unnecessary, not that it can’t sometimes be used in a contorted manner.

___
(*) If the turn-off probability is p then, since X1 is visited once per journey and X2 is visited 1−p times on average, a fraction 1/(2−p) of the driver’s visits to (X1 or X2) are to X1.

Doomsday

The Doomsday Argument says that if your (subjective) prior odds for imminent doom versus deferred doom are r, and in the deferred case the human race will grow to N times its current size, then your posterior odds, having considered how unlikely it is that you in particular, among all in the ultimate span of humanity, should be pondering the question, are Nr: a “Bayesian shift” towards imminent doom.
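
Spelled out with illustrative numbers (r and N are my symbols for the prior odds and the growth factor), the argument’s arithmetic, which the rest of this section disputes, is just:

```python
r = 1 / 10   # prior odds 1:10 for imminent vs deferred doom
N = 100      # deferred doom means humanity grows to 100x its current size

# Likelihood of "being this early": 1 if doom is imminent, 1/N if deferred,
# so the posterior odds are the prior odds times N.
posterior_odds = r * (1 / (1 / N))
print(posterior_odds)  # 10.0, i.e. the odds now favour imminent doom 10:1
```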

This is a question of outer credence (which model, doomed soon or doomed later?), entirely different in nature to the questions of irrelevant inner credence raised by Sleeping Beauty and the Absent-Minded Driver.

The problem with Doomsday is in the conditioning: how unlikely it is that you … should be pondering the question. Conditional probability works on events, so we can’t condition on the fact of the rational agent’s existence unless that existence is an outcome from some experiment. Attempting to model it in a way that supports the Bayesian shift, you end up with predeterministic nonsense like this:

  • Experiment: Humanity will ultimately extend to n (fixed, unknown) live births. Chosen uniformly at random from among them are m (fixed, unknown) rational agents.

  • Bayesian shift: YES.

As soon as you drop the requirement to see into the future, the Bayesian shift evaporates:

  • Experiment: Humanity will ultimately extend to n (fixed, unknown) live births. Only those with attributes “close to ideal” (by some fixed but unknown measure) become rational agents.

  • Bayesian shift: NO.
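
A rough formalisation of why the shift appears in the first experiment but not the second (the numbers are illustrative, and “being early” means being among the births that have already happened):

```python
births_so_far = 1e11   # illustrative
growth = 100           # deferred doom: 100x as many ultimate births

# First experiment: you were drawn uniformly from ALL ultimate births, so
# finding yourself among the early ones is evidence about the total.
lik_early_if_soon  = 1.0
lik_early_if_later = births_so_far / (growth * births_so_far)  # = 1/growth
print(lik_early_if_soon / lik_early_if_later)  # 100.0 -> Bayesian shift

# Second experiment: whether this particular early birth becomes a rational
# agent depends only on its own attributes, not on how many births follow,
# so the likelihoods match and the odds don't move.
lik_ponder_if_soon  = 0.01
lik_ponder_if_later = 0.01
print(lik_ponder_if_soon / lik_ponder_if_later)  # 1.0 -> no shift
```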

I suspect that much of the confusion around Doomsday stems from the focus on what “I” should think about “my” circumstances. An OOBR-like focus on defining the experiment in which the rational agent is placed seems, to me, a cleaner way of approaching things.
