# Geometric Rationality is Not VNM Rational

One elephant in the room throughout my geometric rationality sequence is that it sometimes advocates randomizing between actions, and so geometrically rational agents cannot possibly satisfy the von Neumann–Morgenstern (VNM) axioms. That is correct: I am rejecting the VNM axioms. In this post, I will say more about why I am making such a bold move.

# A Model of Geometric Rationality

I have been rather vague about what I mean by geometric rationality. I still want to be vague in general, but for the purposes of this post, I will give a concrete definition, using the type signature of the VNM utility theorem. (I do not think this definition is good enough, and want to restrict its scope to this post.)

A preference ordering ⪯ on lotteries over outcomes is called geometrically rational if there exists some probability distribution P over interval-valued utility functions U on outcomes such that L⪯M if and only if G_{U∼P} E_{O∼L} U(O) ≤ G_{U∼P} E_{O∼M} U(O).

For comparison, an agent is VNM rational if there exists a single utility function U such that L⪯M if and only if E_{O∼L} U(O) ≤ E_{O∼M} U(O).

Geometric Rationality is weaker than VNM rationality: under reasonable assumptions, the utility function of a VNM rational agent can be taken to be interval valued, and then we can always take the probability distribution that assigns probability 1 to this utility function.

Geometric Rationality is strictly weaker, because it sometimes strictly prefers lotteries over any of the deterministic outcomes, and VNM rational agents never do this.
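To see how a geometrically rational agent can strictly prefer a lottery, here is a minimal sketch in Python. The two utility functions and their equal weights are illustrative assumptions, not part of the definition itself:

```python
import math

# Two hypothetical utility functions (e.g. one per subagent),
# each assigned probability 1/2 by the distribution P.
U1 = {"A": 1.0, "B": 0.0}
U2 = {"A": 0.0, "B": 1.0}

def geometric_score(lottery):
    """G over utility functions of the arithmetic expected utility.

    lottery: dict mapping outcome -> probability.
    """
    eu1 = sum(p * U1[o] for o, p in lottery.items())
    eu2 = sum(p * U2[o] for o, p in lottery.items())
    return math.sqrt(eu1 * eu2)  # equal-weight geometric mean

pure_a = geometric_score({"A": 1.0})            # -> 0.0
pure_b = geometric_score({"B": 1.0})            # -> 0.0
mixed = geometric_score({"A": 0.5, "B": 0.5})   # -> 0.5

# The 50/50 lottery strictly beats both pure outcomes,
# which no VNM-rational preference ordering allows.
assert mixed > pure_a and mixed > pure_b
```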

The VNM utility theorem says that any preference ordering on lotteries that satisfies some simple axioms must be VNM rational (i.e. have a utility function as above). Since I am advocating for a weaker notion of rationality, I must reject some of these axioms.

# Against Independence

The VNM axiom that I am rejecting is the independence axiom. It states that given lotteries A, B, and C, and a probability p, A⪯B if and only if pA+(1−p)C ⪯ pB+(1−p)C. Thus, mixing in a probability 1−p of C will not change my preference between A and B.

Let us go through an example.

Alice and Bob are a married couple. They are trying to decide where to move, buy a house, and live for the rest of their lives. Alice prefers Atlanta, Bob prefers Boston. The agent I am modeling here is the married couple consisting of Alice and Bob.

Bob’s preference for Boston is sufficiently stronger than Alice’s preference for Atlanta, that given only these options, they would move to Boston (A≺B).

Bob is presented with a unique job opportunity, where he (and Alice) can move to California, and try to save the world. However, he does not actually have a job offer yet. They estimate an 80 percent chance that he will get a job offer next week. Otherwise, they will move to Atlanta or Boston.

California is a substantial improvement for Bob’s preferences over either of the other options. For Alice, it is comparable to Boston. Alice and Bob are currently deciding on a policy of what to do conditional on getting and not getting the offer. It is clear that if they get the offer, they will move to California. However, they figure that since Bob’s preferences are in expectation being greatly satisfied in the 80 percent of worlds where they are in California, they should move to Atlanta if they do not get the offer (CB≺CA).
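This pattern of preferences can be reproduced with the geometric-expectation definition above. The specific utility numbers below are made up for illustration; what matters is only the qualitative pattern (Boston wins head-to-head, but the Atlanta fallback wins once the 80% California branch exists):

```python
import math

# Hypothetical interval-valued utilities for each spouse (assumed numbers).
alice = {"atlanta": 0.5, "boston": 0.3, "california": 0.3}
bob = {"atlanta": 0.2, "boston": 0.6, "california": 0.9}

def G(lottery):
    """Equal-weight geometric mean of each spouse's expected utility."""
    ea = sum(p * alice[o] for o, p in lottery.items())
    eb = sum(p * bob[o] for o, p in lottery.items())
    return math.sqrt(ea * eb)

# Head-to-head, Boston wins: A ≺ B.
assert G({"atlanta": 1.0}) < G({"boston": 1.0})

# But with an 80% chance of California, the fallback flips: CB ≺ CA,
# violating the independence axiom.
CA = {"california": 0.8, "atlanta": 0.2}
CB = {"california": 0.8, "boston": 0.2}
assert G(CB) < G(CA)
```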

Alice and Bob are collectively violating the independence axiom, and are not VNM rational. Are they making a mistake? Should we not model them as irrational due to their weird obsession with fairness?

# Dutch Books and Updatelessness

You might claim that abandoning the independence axiom opens Alice and Bob up to being Dutch booked. The argument would go as follows. First, you offer Alice and Bob a choice between two policies:

Policy CA: California if possible, otherwise Atlanta, and

Policy CB: California if possible, otherwise Boston.

They choose policy CA. Then you reveal that they did not get the job offer and will have to move to Atlanta. You offer to let them pay you a penny to move to Boston instead. In this way, you extract free money from them!

The problem is they don’t want to switch to Boston; they are happy moving to Atlanta. Bob’s preferences are being extra satisfied in the other possible worlds where he is in California. He can take a hit in this world.

If California did not exist, they would want to move to Boston, and would pay a penny to move to Boston rather than Atlanta. The point is that they are being updateless. When they observe they cannot choose California, they do not fully update on this fact and act as if the good California worlds never existed. Instead, they follow through with the policy they agreed to initially.

We can take this further, and pretend that they didn’t even consider Atlanta vs Boston. They just got a job offer and decided to move to California. Then all the world-saving money disappeared overnight, the job offer was retracted, and Alice and Bob are newly considering Atlanta vs Boston. They might reason that if they had taken the time to consider this possibility up front, they would have chosen Atlanta, so they follow through on the policy they would have chosen if they had thought about it more in advance.

They have a preference for fairness, and this preference is non-local. It cares about what happens in other worlds.

I gave the above example about a married couple, because it made it cleaner to understand the desire for fairness. However, I think that it makes sense for individual humans to act this way with respect to their various different types of preferences.


Hi! I just wanted to mention that I really appreciate this sequence. I’ve been having lots of related thoughts, and it’s great to see a solid theoretical grounding for them. I find the notion that bargaining can happen across lots of different domains—different people or subagents, different states of the world, maybe different epistemic states—particularly useful. And this particular post presents the only argument for rejecting a VNM axiom I’ve ever found compelling. I think there’s a decent chance that this sequence will become really foundational to my thinking. Thanks!

I don’t know that I have much “solid theoretical grounding.” From my perspective, this sequence is me putting together a bunch of related concepts (and thus doing some of the hard parts of noticing that they are related), but not really giving good theoretical grounding. In fact, I was putting off posting this sequence, so I could have time to develop theoretical grounding, but then gave up on that and just posted what I had in response to the community wanting orientation around FTX.

Thanks for posting these, and I’m looking forward to the grounding posts should you find time.

This reminds me of an example I described in this SL4 post:

I remember my reaction when first reading this was “both AIs delegate their power, then a jointly trusted coinflip is made, then a new AI is constructed which maximizes one of the utility functions”. That seems to solve the problem in general.

Why does the probability need to be close to 50% for the AI to agree to the merger? Shouldn’t its threshold for agreeing to the merger depend on how likely one or the other AI is to beat the other in a war for the accessible universe?

Is there an assumption that the two AIs are roughly equally powerful, and that a both-lose scenario is relatively unlikely?

It is first past the post; minorities get nothing. There might be an implicit assumption that the created new agent agrees on probabilities with the old agents. 49% plausible paperclips, 51% plausible staples will act 100% staples and not serve paperclips at all.

Ah, maybe the way to think about it is that if I think I have a 30% chance of success before the merger, then I need to have a 30%+epsilon chance of my goal being chosen after the merger. And my goal will only be chosen if it is estimated to have the higher chance of success. And so, if we assume that the chosen goal is definitely going to succeed post-merger (since there’s no destructive war), that means I need to have a 30%+epsilon chance that my goal has a >50% chance of success post-merger. Or in other words “a close to 50% probability of success”, just as Wei said.

I’m confused by the “no dutch book” argument. Pre-California-lottery-resolution, we’ve got CB≺CA, but post-California-lottery-resolution we simultaneously still have A≺B and “we refuse any offer to switch from B to A”, which makes me very uncertain what ≺ means here.

Is this just EDT vs UDT again, or is the post-lottery A≺B subtly distinct from the pre-lottery one, or is “if you see yourself about to be dutch-booked, just suck it up and be sad” a generally accepted solution to otherwise being DB’d, or something else?

I think it is EDT vs UDT. We prefer B to A, but we prefer CA to CB, not because of dutch books, but because CA is good enough for Bob to be fair, and A is not good enough for Bob.

...huh. So UDT in general gets to just ignore the independence axiom because:

UDT’s whole shtick is credibly pre-committing to seemingly bad choices in some worlds in order to get good outcomes in others, and/or

UDT is optimizing over policies rather than actions, and I guess there’s nothing stopping us having preferences over properties of the policy, like fairness (instead of only ordering policies by their “ground level” outcomes). And this is where G comes in; it’s one way of encoding something-like-fairness.

Sound about right?

yep

I find this example interesting but very weird. The couple is determining fairness by using “probability mass of happiness” as the unit of account. But it seems very natural to me to go one step further and adjust for the actual outcomes, investing more resources into the sub-agent that has worse luck.

I don’t know if this is technically doable (I foresee complications with asymmetric utility functions of the two sub-agents, where one is harder to satisfy than the other, or even just has more variance in outcomes), but I think such an adjustment should recover the VNM independence condition.

Figure I should put this somewhere: I recently ran into some arguments from Lara Buchak that were similar to this (podcast: https://www.preposterousuniverse.com/podcast/2022/12/12/220-lara-buchak-on-risk-and-rationality/)

From listening to that podcast, it seems like even she would not advocate for preferring a lottery between two outcomes to either of the pure components.

See also: https://www.lesswrong.com/posts/qij9v3YqPfyur2PbX/indexical-uncertainty-and-the-axiom-of-independence for an argument against independence

Note that I tentatively think this will be the last post in the Geometric Rationality sequence.

I am confused about something. You write that a preference ordering L⪯M is geometrically rational if G_{U∼P} E_{O∼L} U(O) ≤ G_{U∼P} E_{O∼M} U(O).

This is compared to VNM rationality, which favours L⪯M if and only if E_{O∼L} U(O) ≤ E_{O∼M} U(O).

Why, in the definition of geometric rationality, do we have both the geometric average and the arithmetic average? Why not just say “an ordering is geometrically rational if it favours L⪯M if and only if G_{O∼L} U(O) ≤ G_{O∼M} U(O)”?

As I understand it, this is what Kelly betting does. It doesn’t favour lotteries over either outcome, but it does reject the VNM continuity axiom, rather than the independence axiom.

These are super interesting ideas, thanks for writing the sequence!

I’ve been trying to think of toy models where the geometric expectation pops out—here’s a partial one, which is about conjunctivity of values:

Say our ultimate goal is to put together a puzzle (U = 1 if we can, U = 0 if not), for which we need 2 pieces. We have sub-agents A and B who care about the two pieces respectively, each of whose utility for a state is its probability estimate for finding its piece there. Then our expected utility for a state is the product of their utilities (assuming this is a one-shot game, so we need to find both pieces at once), and so our decision-making will be geometrically rational.

This easily generalizes to an N-piece puzzle. But, I don’t know how to extend this interpretation to allow for unequal weighing of agents.
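A sketch of this toy model (the state names and per-piece probabilities are made up): since the puzzle is completed only if both pieces are found, expected utility is the product of the subagents' utilities, which ranks states exactly as their geometric mean does:

```python
import math

# Hypothetical probability that each subagent finds its piece in each state.
p_piece = {
    "state1": (0.9, 0.2),  # (A's piece, B's piece)
    "state2": (0.5, 0.5),
}

def expected_utility(state):
    """P(complete the 2-piece puzzle) = product of independent per-piece probs."""
    pa, pb = p_piece[state]
    return pa * pb

def geometric_mean(state):
    pa, pb = p_piece[state]
    return math.sqrt(pa * pb)

# The two rankings agree: EU is the geometric mean squared,
# and squaring is monotone on [0, 1].
ranking_eu = sorted(p_piece, key=expected_utility)
ranking_gm = sorted(p_piece, key=geometric_mean)
assert ranking_eu == ranking_gm
```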

Another setting that seems natural and gives rise to multiplicative utility is if we are trying to cover as much of a space as possible, and we divide it dimension-wise into subspaces, each tracked by a subagent. To get the total size covered, we multiply together the sizes covered within each subspace.

We can kinda shoehorn unequal weighing in here if we have each sub-agent track not just the fractional or absolute coverage of their subspace, but the per-dimension geometric average of their coverage.

For example, say we’re trying to cover a 3D cube that’s 10x10x10, with subagent A minding dimension 1 and subagent B minding dimensions 2 and 3. A particular outcome might involve A having 4/10 coverage and B having 81/100 coverage, for a total coverage of (4/10)*(81/100), which we could also phrase as (4/10)*(9/10)^2. I’m not sure how to make uncertainty work correctly within each factor though.
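A quick check of the arithmetic in this example:

```python
# B's coverage of its 10x10 subspace is 81/100 cells; its per-dimension
# geometric average is sqrt(81/100) = 9/10, counted once per dimension.
total = (4 / 10) * (81 / 100)
per_dim = (4 / 10) * (9 / 10) ** 2
assert abs(total - per_dim) < 1e-12  # both equal 0.324
```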

How does this work with Kelly betting? There, aren’t the relevant utility functions going to be either linear or logarithmic in wealth?

Yeah, I think this definition is more centrally talking about Nash bargaining than Kelly betting. Kelly betting can be expressed as maximizing a utility function that is logarithmic in wealth, and so can be seen as VNM rational.
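As a small illustration of that point (standard Kelly setup; the win probability and odds below are arbitrary choices): maximizing expected log wealth over bet fractions recovers the usual Kelly fraction f* = p − q/b, so Kelly betting is expected-utility maximization with U = log(wealth):

```python
import math

def expected_log_wealth(f, p=0.6, b=1.0):
    """E[log wealth] after betting fraction f at win probability p and odds b:1."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# Grid-search over bet fractions.
fractions = [i / 1000 for i in range(0, 999)]
best = max(fractions, key=expected_log_wealth)

# Closed-form Kelly fraction: f* = p - (1 - p)/b = 0.6 - 0.4 = 0.2.
assert abs(best - 0.2) < 1e-3
```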

It’s not just VNM; it just doesn’t even make logical sense. Probabilities are about your knowledge, not the state of the world: barring bizarre fringe cases/Cromwell’s law, I can always say that whatever I’m doing has probability 1, because I’m currently doing it, meaning it’s physically impossible to randomize your own actions. I can certainly have a probability other than 0 or 1 that I will do something, if this action depends on information I haven’t received. But as soon as I receive all the information involved in making my decision and update on it, I can’t have a 50% chance of doing something. Trying to randomize your own actions involves refusing to update on the information you have, a violation of Bayes’ theorem.

In this world, the one that actually exists, Bob still wants to move to Boston. The fact that Bob made a promise and would now face additional costs associated with breaking the contract (i.e. upsetting Alice) doesn’t change the fact that he’d be happier in Boston; it just means that the contract and the action of revealing this information changed the options available. The choices are no longer “Boston” vs. “Atlanta,” they’re “Boston and upset Alice” vs. “Atlanta and don’t upset Alice.”

Moreover, holding to this contract after the information is revealed also rejects the possibility of a Pareto improvement (equivalent to a Dutch book). Say Alice and Bob agree to randomize their choice as you say. In this case, both Alice and Bob are strictly worse off than if they had agreed on an insurance policy. A contract that has Bob more than compensate Alice for the cost of moving to Boston if the California option fails would leave both of them strictly better off.

So, I am trying to talk about the preferences of the couple, not the preferences of either individual. You might reject that the couple is capable of having preferences; if so, I am curious whether you think Bob is capable of having preferences but not the couple, and if so, why?

I agree if you can do arbitrary utility transfers between Alice and Bob at a given exchange rate, then they should maximize the sum of their utilities (at that exchange rate), and do a side transfer. However, I am assuming here that efficient compensation is not possible. I specifically made it a relatively big decision, so that compensation would not obviously be possible.

Whether the couple is capable of having preferences probably depends on your definition of “preferences.” The more standard terminology for preferences by a group of people is “social choice function.” The main problem we run into is that social choice functions don’t behave like preferences.

I wrote a post about ex ante prioritarianism some time ago, with some other references that might be of interest: https://forum.effectivealtruism.org/posts/bqcxp57hTybusvcqp/ex-ante-prioritarianism-and-negative-leaning-utilitarianism-1

More recent objection, probably basically a money pump (I haven’t read the article): “In this article, I argue that Ex-Ante Prioritarianism suffers from much the same problem: it violates a sequential version of Ex-Ante Pareto, that is, it prescribes sequences of choices that worsen the expectations for everyone.” https://www.cambridge.org/core/journals/utilitas/article/exante-prioritarianism-violates-sequential-exante-pareto/EC2F27EC7F39D4BC009AC76C86F1C7F7