# awenonian comments on Introduction To The Infra-Bayesianism Sequence

• I’m glad to hear that the question of what hypotheses produce actionable behavior is on people’s minds.

I modeled Murphy as an actual agent, because I figured a hypothesis like “A cloaked superintelligence is operating the area that will react to your decision to do X by doing Y” is always on the table, and is basically a template for allowing Murphy to perform arbitrary action Y.

I feel like I didn’t quite grasp what you meant by “a constraint on Murphy is picked according to this probability distribution/​prior, then Murphy chooses from the available options of the hypothesis they picked”

But based on your explanation after, it sounds like you essentially ignore hypotheses that don’t constrain Murphy, because they act as an expected utility drop on all states, so it just means you’re comparing −1,000,000 and −999,999, instead of 0 and 1. For example, there’s a whole host of hypotheses of the form “A cloaked superintelligence converts all local usable energy into a hellscape if you do X”, and since that’s a possibility for every X, no action X is graded lower than the others by its existence.

That example is what got me thinking, in the first place, though. Such hypotheses don’t lower everything equally, because, given other Laws of Physics, the superintelligence would need energy to hell-ify things. So arbitrarily consuming energy would reduce how bad the outcomes could be if a perfectly misaligned superintelligence was operating in the area. And, given that I am positing it as a perfectly misaligned superintelligence, we should both expect it to exist in the environment Murphy chooses (what could be worse?) and expect any reduction of its actions to be as positive of changes as a perfectly aligned superintelligence’s actions could be, since preventing a maximally detrimental action should match, in terms of Utility, enabling a maximally beneficial action. Therefore, entropy-bombs.

Thinking about it more, assuming I’m not still making a mistake, this might just be a broader problem, not specific to this in any way. Aren’t I basically positing Pascal’s Mugging?

Anyway, thank you for replying. It helped.

• You’re completely right that hypotheses with unconstrained Murphy get ignored because you’re doomed no matter what you do, so you might as well optimize for just the other hypotheses where what you do matters. Your “-1,000,000 vs −999,999 is the same sort of problem as 0 vs 1” reasoning is good.

Again, you are making the serious mistake of trying to think about Murphy verbally, rather than thinking of Murphy as the personification of the “inf” part of the definition of worst-case expected value, $E_\Psi(f) := \inf_{\psi \in \Psi} E_\psi(f)$, and writing actual equations. $\Psi$ is the available set of possibilities (probability distributions) for a hypothesis. If you really want to, you can think of this as constraints on Murphy, and Murphy picking from available options, but it’s highly encouraged to just work with the math.

For mixing hypotheses $\Psi_i$ (several different sets of possibilities) according to a prior distribution $\zeta$, you can write it as an expectation functional via $E_{\sum_i \zeta(i)\Psi_i}(f) := \sum_i \zeta(i)E_{\Psi_i}(f)$ (mix the expectation functionals of the component hypotheses according to your prior on hypotheses), or as a set via $\sum_i \zeta(i)\Psi_i := \{\sum_i \zeta(i)\psi_i \mid \forall i: \psi_i \in \Psi_i\}$ (the available possibilities for the mix of hypotheses are all of the form “pick a possibility from each hypothesis, mix them together according to your prior on hypotheses”).

This is what I meant by “a constraint on Murphy is picked according to this probability distribution/​prior, then Murphy chooses from the available options of the hypothesis they picked”: the set $\sum_i \zeta(i)\Psi_i$ (your mixture of hypotheses according to a prior) corresponds to selecting one of the sets $\Psi_i$ according to your prior $\zeta$, and then Murphy picking freely from the set $\Psi_i$.
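If it helps to see the two formulations side by side, here is a minimal Python sketch (the sets, distributions, prior, and utility function are all toy assumptions, not anything from the sequence): a hypothesis is a finite set of distributions, Murphy is the min over that set, and mixing the sets then minimizing agrees with mixing the worst-case expectation functionals.

```python
import itertools

def expected_value(psi, f):
    """E_psi(f): ordinary expected utility of f under one distribution psi."""
    return sum(p * f(o) for o, p in psi.items())

def worst_case(Psi, f):
    """E_Psi(f) = inf over psi in Psi of E_psi(f) -- the 'Murphy' step."""
    return min(expected_value(psi, f) for psi in Psi)

def mix_as_functional(zeta, hypotheses, f):
    """Mixture as an expectation functional: sum_i zeta(i) * E_{Psi_i}(f)."""
    return sum(z * worst_case(Psi, f) for z, Psi in zip(zeta, hypotheses))

def mix_as_set(zeta, hypotheses):
    """Mixture as a set: every way of picking one psi_i per hypothesis,
    combined into the single distribution sum_i zeta(i) * psi_i."""
    mixed = []
    for picks in itertools.product(*hypotheses):
        outcomes = set().union(*(psi.keys() for psi in picks))
        mixed.append({o: sum(z * psi.get(o, 0.0) for z, psi in zip(zeta, picks))
                      for o in outcomes})
    return mixed

# Toy example: two hypotheses over outcomes {"good", "bad"}.
Psi_1 = [{"good": 0.5, "bad": 0.5}, {"good": 0.1, "bad": 0.9}]  # Murphy has options
Psi_2 = [{"good": 0.9, "bad": 0.1}]                             # fully constrained
zeta = [0.5, 0.5]
f = lambda o: 1.0 if o == "good" else 0.0

via_functional = mix_as_functional(zeta, [Psi_1, Psi_2], f)
via_set = worst_case(mix_as_set(zeta, [Psi_1, Psi_2]), f)
# The two formulations agree: the infima decouple across hypotheses.
```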

Using $E_{\sum_i \zeta(i)\Psi_i}(f) = \sum_i \zeta(i)E_{\Psi_i}(f)$ (and considering that our choice of what to do affects the choice of $f$: we’re trying to pick the best function $f$), we can see that if the prior is composed of a bunch of “do this sequence of actions or bad things happen” hypotheses, the details of what you do sensitively depend on the probability distribution over hypotheses. Just like with AIXI, really.
Informal proof: let $f_i$ be the function “follow the action sequence demanded by hypothesis $i$”. If $E_{\Psi_i}(f_i) = U$ and $E_{\Psi_i}(f_j) = 0$ for $j \neq i$ (assuming, for simplicity, that every hypothesis pays out the same utility $U$ for compliance), then we can see that

$E_{\sum_i \zeta(i)\Psi_i}(f_j) = \sum_i \zeta(i)E_{\Psi_i}(f_j) = \zeta(j) \cdot U$

and so, the best sequence of actions to do would be the one associated with the “you’re doomed if you don’t do blahblah action sequence” hypothesis with the highest prior. Much like AIXI does.
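As a toy check of this informal proof (the prior numbers below are made up), the mixture’s value for following sequence $j$ comes out to $\zeta(j) \cdot U$, so the best plan just tracks the highest-prior doom hypothesis:

```python
# Each hypothesis i pays worst-case utility U if you follow its demanded
# action sequence (f_j with j == i), and 0 otherwise ("you're doomed").
U = 1.0
zeta = [0.5, 0.3, 0.2]  # hypothetical prior over three doom hypotheses

def E_mix(j):
    """Mixture value of following hypothesis j's demanded action sequence."""
    return sum(z * (U if i == j else 0.0) for i, z in enumerate(zeta))

values = [E_mix(j) for j in range(len(zeta))]       # zeta(j) * U for each j
best = max(range(len(zeta)), key=lambda j: values[j])
# 'best' is the index of the highest-prior doom hypothesis.
```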

Using the same sort of thing, we can also see that if there’s a maximally adversarial hypothesis in there somewhere that’s just like “you get 0 reward, screw you” no matter what you do (let’s say this is $\Psi_0$, with $E_{\Psi_0}(f) = 0$ for all $f$), then we have

$E_{\zeta(0)\Psi_0 + \sum_{i \geq 1} \zeta(i)\Psi_i}(f) = \zeta(0) \cdot 0 + \sum_{i \geq 1} \zeta(i)E_{\Psi_i}(f) = \sum_{i \geq 1} \zeta(i)E_{\Psi_i}(f)$

And so, that hypothesis drops out of the process of calculating the expected value, for all possible functions/​actions. Just do a scale-and-shift, and you might as well be dealing with the renormalized prior $\zeta(i \mid i \geq 1)$, which a-priori assumes you aren’t in the “screw you, you lose” environment.
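A quick numerical illustration of the drop-out (the plan values and $\zeta(0)$ below are invented): mixing in the all-zero hypothesis just rescales every plan’s value by $1 - \zeta(0)$, so the ranking of plans, and hence the decision, is untouched.

```python
# Hypothetical worst-case values E(f) for three candidate plans under the
# mixture of the "real" hypotheses (i >= 1), using the renormalized prior:
values = {"plan_a": 0.2, "plan_b": 0.7, "plan_c": 0.4}

zeta_0 = 0.3  # prior mass on the all-zero "screw you, you lose" hypothesis

# Psi_0 contributes zeta_0 * 0, and the rest gets weight (1 - zeta_0):
with_doom = {plan: zeta_0 * 0.0 + (1 - zeta_0) * v for plan, v in values.items()}

best_without = max(values, key=values.get)
best_with = max(with_doom, key=with_doom.get)
# Same argmax either way: a scale-and-shift never changes which plan wins.
```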

Hm, what about if you’ve just got two hypotheses: one where you’re like “my knightian uncertainty scales with the amount of energy in the universe, so if there’s lots of energy available, things could be really bad, while if there’s little energy available, Murphy can’t make things bad” ($\Psi_1$), and one where reality behaves pretty much as you’d expect it to ($\Psi_2$). And your two possible options would be “burn energy freely so Murphy can’t use it” (the choice $f_1$, attaining a worst-case expected utility of $a$ in $\Psi_1$ and $b$ in $\Psi_2$), and “just try to make things good and don’t worry about the environment being adversarial” (the choice $f_2$, attaining 0 utility in $\Psi_1$, 1 utility in $\Psi_2$).

The expected utility of $f_1$ (burn energy) would be $\zeta(1) \cdot a + \zeta(2) \cdot b$.
And the expected utility of $f_2$ (act normally) would be $\zeta(1) \cdot 0 + \zeta(2) \cdot 1 = \zeta(2)$.

So “act normally” wins if $\zeta(2) > \zeta(1)a + \zeta(2)b$, which can be rearranged as $\zeta(2)(1 - b) > \zeta(1)a$. Ie, you’ll act normally if the probability of “things are normal” times the loss from burning energy when things are normal exceeds the probability of “Murphy’s malice scales with amount of available energy” times the gain from burning energy in that universe.
So, assuming you assign a high enough probability to “things are normal” in your prior, you’ll just act normally. Or, making the simplifying assumption that “burn energy” has similar expected utilities in both cases (ie, $a \approx b$, in which case “act normally” wins iff $a < \zeta(2)$), then it would come down to questions like “is the utility of burning energy closer to the worst-case 0, where Murphy has free reign, or the best-case 1, where I can freely optimize?”
And this is assuming there are just two options; the actual strategy selected would probably be something like “act normally, and if it looks like things are going to shit, start burning energy so it can’t be used to optimize against me”.
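Plugging hypothetical numbers into this two-hypothesis decision (the prior and utilities below are invented for illustration):

```python
# zeta = (z1, z2) is the prior on "badness scales with energy" (Psi_1) vs
# "things are normal" (Psi_2); a, b are "burn energy"'s worst-case expected
# utilities in Psi_1 and Psi_2 respectively.
z1, z2 = 0.1, 0.9
a, b = 0.4, 0.5

eu_burn = z1 * a + z2 * b        # EU of f_1: burn energy freely
eu_normal = z1 * 0.0 + z2 * 1.0  # EU of f_2: act normally

act_normally = eu_normal > eu_burn
# The rearranged condition from the text gives the same verdict:
same_verdict = act_normally == (z2 * (1 - b) > z1 * a)
```

With these numbers a high prior on “things are normal” makes acting normally win, matching the argument above.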

Note that, in particular, the hypothesis where the level of attainable badness scales with available energy is very different from the “screw you, you lose” hypothesis, since there are actions you can take that do better and worse in the “level of attainable badness scales with energy in the universe” hypothesis, while the “screw you, you lose” hypothesis just makes you lose. And both of these are very different from a “you lose if you don’t take this exact sequence of actions” hypothesis.

Murphy is not a physical being, it’s a personification of an equation. Thinking verbally about an actual Murphy doesn’t help, because you start confusing very different hypotheses; think purely about what the actual set $\Psi$ of probability distributions corresponding to a hypothesis looks like. I can’t stress this enough.

Also, remember, the goal is to maximize worst-case expected value, not worst-case value.