Game Theory without Argmax [Part 2]

Written during the SERI MATS program under the joint mentorship of John Wentworth, Nicholas Kees, and Janus.

[Header image: a blindfolded robot and a blindfolded human playing Hungry Hungry Hippos.]

Motivation

I am surrounded, incessantly, by other people, possessing a range of different capabilities, striving for goals both common and opposing — and, what is more, they are incessantly surrounded by me. Fangs pierce the cartesian boundary. Outsideness leaks in. Insideness leaks out. It is the source of all joy and misery in my life.

Anyway. It’s the interaction of agents that characterises game theory.[1] Classical game theory says that when two utility-maximisers interact in a shared environment, they will choose options in nash equilibrium, where a profile of options is in nash equilibrium if no player can increase their utility by unilaterally changing their option.

What does higher-order game theory (which dispenses with utility functions and maximisation) say about the interaction between agents? Would a satisficer cooperate with a quantiliser in prisoner’s dilemma?

Let’s find out.


Preliminaries

All you need to know is Definition 3 from [Part 1], and some basic classical game theory.

Definition 3: Let $X$ be any set of options and $R$ be any set of payoffs. An optimiser is any functional $\phi \in ((X \to R) \to \mathcal{P}(X))$. An $(X,R)$-task is any function $f : X \to R$. An option $x \in X$ is $\phi$-optimal for a task $f$ if and only if $x \in \phi(f)$.

Let’s review the textbook definition of classical simultaneous games.

Definition 5.

  1. Let $(X_i)_{i \in N}$ be a family of sets indexed by a set $N$ of players.

    A classical simultaneous game over $(X_i)_{i \in N}$ is a function $G : \prod_{i \in N} X_i \to \mathbb{R}^N$.

  2. The option-profiles of $G$ are the elements of the product $\prod_{i \in N} X_i$.

  3. The $i$-th deviation map $\mathrm{dev}_i : \prod_{j \in N} X_j \to (X_i \to \prod_{j \in N} X_j)$ is given by $\mathrm{dev}_i(x)(x'_i)_j = x_j$ if $j \neq i$ and $\mathrm{dev}_i(x)(x'_i)_i = x'_i$, for all $x \in \prod_{j \in N} X_j$, $x'_i \in X_i$, $i \in N$, and $j \in N$. I’ll write $\mathrm{dev}^x_i$ for $\mathrm{dev}_i(x)$.

    In the finite case, where $N = \{1, \ldots, n\}$, then $\mathrm{dev}^{\langle x_1, \ldots, x_n \rangle}_i(x'_i) = \langle x_1, \ldots, x_{i-1}, x'_i, x_{i+1}, \ldots, x_n \rangle$.

    The deviation map describes how the $i$th player can change an option-profile by unilaterally changing their own option. Note that the deviation maps depend only on the option spaces, not the game itself.

  4. The best-response function of $G$ is the function $\mathrm{BR}_G : \prod_{i \in N} X_i \to \mathcal{P}(\prod_{i \in N} X_i)$ defined by $\mathrm{BR}_G(x) = \prod_{i \in N} \mathrm{argmax}(\pi_i \circ G \circ \mathrm{dev}^x_i)$, where $\mathrm{dev}^x_i$ is the $i$th deviation map and $\pi_i : \mathbb{R}^N \to \mathbb{R}$ is the $i$th projection.

  5. Finally, the nash equilibria of $G$ are the set $\mathrm{Nash}(G)$ of option-profiles $x$ such that $x \in \mathrm{BR}_G(x)$.

Note that the fact that the option-profiles in $\mathrm{Nash}(G)$ are in nash equilibrium follows logically from the assumption that each player’s choice is utility-maximising for the task they face, plus the assumption that, if the option-profile is $x$, then the $i$th player faces the task $\pi_i \circ G \circ \mathrm{dev}^x_i$.

It doesn’t require the assumption that the players have “common knowledge” of each other’s rationality, or that the players have any kinds of beliefs whatsoever. That being said, an economist might explain the optimality of the choices by assuming that each player is rational and knows all the relevant facts (i.e. the rules of the game, the rationality of their peers, and the beliefs of their peers). However, an evolutionary biologist might appeal to natural selection to explain optimality. An ML engineer might appeal to SGD, etc.
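Definition 5 is directly executable for finite games. Here is a minimal Python sketch (the helper names and the illustrative prisoner’s-dilemma payoffs are mine, not from the text): it brute-forces the nash equilibria by checking, for each option-profile, that every player’s option maximises the task induced by their deviation map.

```python
from itertools import product

def classical_nash(options, G):
    """Brute-force the nash equilibria of a finite classical game.

    options: a list of option spaces, one per player.
    G: maps an option-profile (tuple) to a tuple of real payoffs.
    """
    equilibria = []
    for x in product(*options):
        def best_responses(i):
            # Player i's task: the payoff reachable by unilateral deviation.
            task = {xi: G(x[:i] + (xi,) + x[i + 1:])[i] for xi in options[i]}
            best = max(task.values())
            return {xi for xi, u in task.items() if u == best}
        if all(x[i] in best_responses(i) for i in range(len(options))):
            equilibria.append(x)
    return equilibria

# Illustrative prisoner's-dilemma payoffs (C = cooperate, D = defect).
payoffs = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}
print(classical_nash([["C", "D"], ["C", "D"]], payoffs.__getitem__))
# → [('D', 'D')]
```

Because the deviation maps depend only on the option spaces, the same function works for any finite game once you supply a payoff function.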


Higher-Order Nash Equilibrium

By inspection, we can see that at no point does the definition of the classical nash equilibrium appeal to any special properties of $\mathrm{argmax}$ or $\mathbb{R}$. This suggests that we can effortlessly generalise the definition of nash equilibrium to any family of optimisers, rather than only utility-maximisers.

Definition 6.

  1. Let $(X_i)_{i \in N}$ be a family of sets indexed by a set $N$ of players. Let $R$ be a set of payoffs, and let $(\phi_i)_{i \in N}$ be a family of optimisers with $\phi_i \in ((X_i \to R) \to \mathcal{P}(X_i))$.[2]

    A higher-order simultaneous $(\phi_i)_{i \in N}$-game is a function $G : \prod_{i \in N} X_i \to R$.

  2. Like before, the option-profiles of $G$ are the elements of the product $\prod_{i \in N} X_i$.

  3. Like before, the $i$-th deviation map $\mathrm{dev}_i : \prod_{j \in N} X_j \to (X_i \to \prod_{j \in N} X_j)$ is given by $\mathrm{dev}_i(x)(x'_i)_j = x_j$ if $j \neq i$ and $\mathrm{dev}_i(x)(x'_i)_i = x'_i$, for all $x$, $x'_i$, $i$, and $j$.

  4. The $(\phi_i)_{i \in N}$-best-response function of $G$ is the function $\mathrm{BR}_G : \prod_{i \in N} X_i \to \mathcal{P}(\prod_{i \in N} X_i)$ defined by $\mathrm{BR}_G(x) = \prod_{i \in N} \phi_i(G \circ \mathrm{dev}^x_i)$, where $\mathrm{dev}^x_i$ is the $i$th deviation map.

  5. Like before, the $(\phi_i)_{i \in N}$-nash equilibria of $G$ are the set $\mathrm{Nash}(G)$ of option-profiles $x$ such that $x \in \mathrm{BR}_G(x)$.

When clear from context, I’ll just say game, best-response, and nash equilibrium.

The intuition is this —

  • Suppose that $x \in \prod_{i \in N} X_i$ is the option-profile that’s collectively chosen.

  • Then player $i$ can achieve the payoff $G(\mathrm{dev}^x_i(x'_i))$ by unilaterally changing their option to $x'_i$, because $\mathrm{dev}^x_i(x'_i)$ is the resulting option-profile.

  • So player $i$ is faced with the task $G \circ \mathrm{dev}^x_i : X_i \to R$.

  • Their choice $x_i$ must be $\phi_i$-optimal for this task, so $x_i \in \phi_i(G \circ \mathrm{dev}^x_i)$.

  • If we assume $\phi_i$-optimality for every player conjunctively, then $x \in \prod_{i \in N} \phi_i(G \circ \mathrm{dev}^x_i) = \mathrm{BR}_G(x)$.

  • If we know that $x \in \mathrm{BR}_G(x)$, and we know nothing else, then the possible option-profiles are $\mathrm{Nash}(G) = \{x : x \in \mathrm{BR}_G(x)\}$.

This definition applies even to unusual cases —

  • Zero-player games? When $N = \emptyset$, then a game is just an element of the payoff space $R$. There’s only one option-profile, the empty sequence, and it’s vacuously nash.

  • One-player games? When $N = \{1\}$, then a game is just a task $G : X_1 \to R$ for the solitary player. An option-profile for the game is specified by an option for the player, and this option-profile is nash whenever the option is optimal for the player.

  • When $N$ is countably infinite, or indeed uncountably infinite, then the definition still makes sense and gives the right answer.[3]

We obtain the classical simultaneous game when $R = \mathbb{R}^N$ and $\phi_i(f) = \mathrm{argmax}(\pi_i \circ f)$, and in this case the higher-order nash equilibrium coincides with the classical nash equilibrium.

The higher-order nash equilibrium allows us to answer somewhat esoteric questions —

An argmaxer, a satisficer, and a quantiliser walk into a bar. They can order from menus $X_1$, $X_2$, and $X_3$ respectively, and the barman will mix their three orders into a single cocktail $G(x_1, x_2, x_3) \in C$. The argmaxer’s preferences over the cocktails are given by $u_1 : C \to \mathbb{R}$, and likewise $u_2$ for the satisficer and $u_3$ for the quantiliser.

What cocktail might they be served?

Well, the possible cocktails are $G(x_1, x_2, x_3)$ such that $x_1 \in \mathrm{argmax}(u_1 \circ G \circ \mathrm{dev}^x_1)$, $x_2 \in \mathrm{satisfice}(u_2 \circ G \circ \mathrm{dev}^x_2)$, and $x_3 \in \mathrm{quantilise}(u_3 \circ G \circ \mathrm{dev}^x_3)$.
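These equilibria can be computed by brute force. The sketch below is mine, not from the text: the menus, the utilities, and the particular possibilistic readings of satisficing (anything at least as good as the anchor) and quantilising (anything in the top-$q$ fraction) are illustrative assumptions.

```python
from itertools import product

def nash_sum(optimisers, options, G):
    """Profiles x with x_i phi_i-optimal for the task G composed with dev_i^x."""
    equilibria = []
    for x in product(*options):
        def optimal(i, phi):
            # Player i's task: the payoff of each unilateral deviation.
            task = {xi: G(x[:i] + (xi,) + x[i + 1:]) for xi in options[i]}
            return x[i] in phi(task)
        if all(optimal(i, phi) for i, phi in enumerate(optimisers)):
            equilibria.append(x)
    return equilibria

def argmaxer(u):
    return lambda task: {x for x, r in task.items()
                         if u(r) == max(u(r2) for r2 in task.values())}

def satisficer(u, anchor):
    # any option at least as good as the anchor option
    return lambda task: {x for x, r in task.items() if u(r) >= u(task[anchor])}

def quantiliser(u, q):
    # possibilistic reading: any option in the top-q fraction by utility
    def phi(task):
        ranked = sorted(task, key=lambda x: u(task[x]), reverse=True)
        return set(ranked[:max(1, int(q * len(ranked)))])
    return phi

# Toy bar: each menu is {0, 1}, and the "cocktail" is the total strength.
G = lambda x: sum(x)
players = [
    argmaxer(lambda c: c),          # argmaxer likes strong cocktails
    satisficer(lambda c: -c, 0),    # satisficer likes weak ones, anchored at 0
    quantiliser(lambda c: c, 0.5),  # quantiliser likes strong ones
]
print(nash_sum(players, [[0, 1]] * 3, G))
# → [(1, 0, 1)]
```

Nothing in `nash_sum` mentions utility or argmax: the optimisers are opaque functionals, exactly as in Definition 6.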

Many well-studied variations of the classical nash equilibrium can be seen as the higher-order nash equilibrium between non-classical optimisers. For example, epsilon-equilibrium is precisely the higher-order nash equilibrium between epsilon-maximisers $\mathrm{argmax}_\epsilon$.[4]
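For instance, here is a sketch of an $\epsilon$-argmax optimiser and the resulting $\epsilon$-equilibria (the prisoner’s-dilemma payoff matrix is my illustrative choice). Relaxing $\epsilon$ from $0$ to $1$ enlarges the equilibrium set.

```python
from itertools import product

def eps_argmax(eps):
    # maximise up to slack eps: {x : f(x) >= max f - eps}
    return lambda task: {x for x, v in task.items()
                         if v >= max(task.values()) - eps}

def equilibria(phi1, phi2, G, X1, X2):
    out = []
    for (x1, x2) in product(X1, X2):
        t1 = {a: G[(a, x2)][0] for a in X1}  # player 1's deviation task
        t2 = {b: G[(x1, b)][1] for b in X2}  # player 2's deviation task
        if x1 in phi1(t1) and x2 in phi2(t2):
            out.append((x1, x2))
    return out

# Illustrative prisoner's dilemma payoffs.
pd = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
      ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
print(equilibria(eps_argmax(0), eps_argmax(0), pd, "CD", "CD"))
# → [('D', 'D')]
print(equilibria(eps_argmax(1), eps_argmax(1), pd, "CD", "CD"))
# → [('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')]
```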


Optimisation is nash-sum closed

There’s a nice upshot of higher-order game theory — the map sending each game to its nash equilibria is itself an optimiser!

Suppose that two agents are modelled by $\phi_1 \in ((X_1 \to R) \to \mathcal{P}(X_1))$ and $\phi_2 \in ((X_2 \to R) \to \mathcal{P}(X_2))$. We can combine them into a single function which assigns, to each game $G : X_1 \times X_2 \to R$, the set of $(\phi_1, \phi_2)$-nash equilibria of the game $G$. This function has type-signature $((X_1 \times X_2 \to R) \to \mathcal{P}(X_1 \times X_2))$, so it corresponds to a single optimiser with option space $X_1 \times X_2$ and payoff space $R$.

This gives us a binary operator on optimisers, $\oplus$, given by $(\phi_1 \oplus \phi_2)(G) = \{(x_1, x_2) \in X_1 \times X_2 : x_1 \in \phi_1(G \circ \mathrm{dev}^x_1) \text{ and } x_2 \in \phi_2(G \circ \mathrm{dev}^x_2)\}$.

This operator is both associative and commutative, justifying the name “nash sum”.

For example, the nash sum $\mathrm{argmax}_1 \oplus \mathrm{argmax}_2$, where $\mathrm{argmax}_i(f) = \mathrm{argmax}(\pi_i \circ f)$, will calculate the classical nash equilibria of a generic payoff matrix $G : X_1 \times X_2 \to \mathbb{R}^2$.
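To emphasise that $\phi_1 \oplus \phi_2$ really is a single optimiser, here is a sketch of $\oplus$ as a higher-order function. The representation choices are mine: a task is a dict from options to payoffs, and an optimiser maps tasks to sets of options.

```python
def oplus(phi1, phi2):
    """Nash sum: a single optimiser on the product option space X1 x X2."""
    def combined(task):
        X1 = {x1 for (x1, _) in task}
        X2 = {x2 for (_, x2) in task}
        return {
            (x1, x2) for (x1, x2) in task
            if x1 in phi1({a: task[(a, x2)] for a in X1})   # player 1's task
            and x2 in phi2({b: task[(x1, b)] for b in X2})  # player 2's task
        }
    return combined

def argmax_coord(i):
    # classical utility-maximiser of the i-th payoff coordinate
    def phi(task):
        best = max(r[i] for r in task.values())
        return {x for x, r in task.items() if r[i] == best}
    return phi

# Illustrative prisoner's-dilemma payoff matrix.
pd = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
      ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
print(oplus(argmax_coord(0), argmax_coord(1))(pd))
# → {('D', 'D')}
```

Note that `oplus(phi1, phi2)` has the same type-signature as `phi1` and `phi2` themselves, which is the closure property being claimed.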

The operator can be extended to any family of optimisers, i.e. $\bigoplus_{i \in N} \phi_i$ where $\phi_i \in ((X_i \to R) \to \mathcal{P}(X_i))$.

Terminology:

  • $\bigoplus_{i \in N} \phi_i$ is a single optimiser, while $(\phi_i)_{i \in N}$ is a family of optimisers.

  • $\bigoplus_{i \in N} \phi_i$ is the nash-sum of $(\phi_i)_{i \in N}$, and the $\phi_i$ are subagents of $\bigoplus_{i \in N} \phi_i$.

  • An option for $\bigoplus_{i \in N} \phi_i$ is an option-profile for $(\phi_i)_{i \in N}$.

  • A task for the optimiser $\bigoplus_{i \in N} \phi_i$ is a game for the family of optimisers $(\phi_i)_{i \in N}$.

  • An option $x$ is optimal for $\bigoplus_{i \in N} \phi_i$ in the task $G$ if and only if the option-profile $x$ is a nash equilibrium for $(\phi_i)_{i \in N}$ in the game $G$.


Totality is not nash-sum closed

Just because two optimisers $\phi_1$ and $\phi_2$ are total does not imply that their nash sum $\phi_1 \oplus \phi_2$ is total.[5] In other words, we may have a model of an agent which works perfectly in all solitary situations, but breaks down when many such agents interact in a shared environment.

The textbook example is Matching Pennies. Each player must choose either heads or tails — player 1 wins if the coins match and player 2 wins if they don’t. If both players are utility-maximisers then there are no nash equilibria, even though utility-maximisers are total optimisers.

        H       T
  H   (1, 0)  (0, 1)
  T   (0, 1)  (1, 0)

Formally speaking, $X_1 = X_2 = \{H, T\}$, $\phi_i(f) = \mathrm{argmax}(\pi_i \circ f)$, and $G(x_1, x_2) = (\delta_{x_1 x_2}, 1 - \delta_{x_1 x_2})$.[6] Then $(\phi_1 \oplus \phi_2)(G) = \emptyset$, although $\phi_1(f) \neq \emptyset$ and $\phi_2(f) \neq \emptyset$ for all tasks $f$.
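Running the example confirms the emptiness. The encoding below is mine; the payoffs follow the matching-pennies rules (player 1 wins on a match, player 2 on a mismatch).

```python
# Matching pennies: player 1 wins on a match, player 2 on a mismatch.
G = {(a, b): (int(a == b), int(a != b)) for a in "HT" for b in "HT"}

def argmax_coord(task, i):
    # classical best responses for the i-th payoff coordinate
    best = max(r[i] for r in task.values())
    return {x for x, r in task.items() if r[i] == best}

nash = [
    (a, b) for (a, b) in G
    if a in argmax_coord({x: G[(x, b)] for x in "HT"}, 0)
    and b in argmax_coord({y: G[(a, y)] for y in "HT"}, 1)
]
print(nash)  # → [] even though argmax_coord never returns the empty set
```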

This shows why we couldn’t have defined optimisers as functions with type-signature $((X \to R) \to \mathcal{P}^{+}(X))$, where $\mathcal{P}^{+}(X)$ denotes the set of non-empty subsets of $X$. In order to ensure nash-sum closure of optimisers, we needed to include non-total optimisers.


Utility-maximisation is not nash-sum closed

It is well-known that the nash sum $\mathrm{argmax}_1 \oplus \mathrm{argmax}_2$ of utility-maximisers isn’t a utility-maximiser for any utility function $u : X_1 \times X_2 \to \mathbb{R}$. For many games, none of the nash equilibria are even pareto-optimal!

The textbook case is the prisoner’s dilemma. Let $R = \mathbb{R}^2$ and consider two players $\phi_1 = \mathrm{argmax}_1$ and $\phi_2 = \mathrm{argmax}_2$ which respectively value the first and second coordinate of the payoff. Their nash sum is the optimiser $\phi_1 \oplus \phi_2$ which calculates the classical nash equilibria of 2-by-2 payoff matrices $G : \{C, D\} \times \{C, D\} \to \mathbb{R}^2$. In particular, when $G$ is the payoff matrix defined below, then $(\phi_1 \oplus \phi_2)(G) = \{(D, D)\}$. However, both players prefer the payoff $G(C, C) = (2, 2)$ to $G(D, D) = (1, 1)$.

        C       D
  C   (2, 2)  (0, 3)
  D   (3, 0)  (1, 1)


Consequentialism is not nash-sum closed

In fact, the nash sum of two utility-maximisers isn’t even consequentialist![7]

Suppose Angelica is babysitting her cousin Tommy, and each child has been given a cookie by their grandmother. If Angelica is awake while Tommy sleeps, then she can steal both cookies for herself. If Angelica sleeps while Tommy is awake then he’ll cause mayhem, getting both children in trouble. If Angelica and Tommy both stay asleep, or if they both wake up, then they’ll each keep one cookie.

The payoffs are summarised in the matrix below (Angelica’s payoff listed first).

                      Tommy wakes   Tommy sleeps
  Angelica wakes        (1, 1)        (2, 0)
  Angelica sleeps       (0, 0)        (1, 1)

When Angelica and Tommy are both awake, this is nash — she doesn’t want to sleep because then he’ll cause mayhem, and he doesn’t want to sleep because then she’ll steal his cookie. In contrast, when they’re both sleeping, this isn’t nash — she can wake up and steal his cookie. But the same outcome will result whether Angelica and Tommy are both sleeping or both awake. So the nash sum of Angelica and Tommy isn’t a consequentialist optimiser![8]
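We can verify the asymmetry directly. In the sketch below the numeric payoffs are my own illustrative choices, matching the story: a stolen cookie is worth 2, and mayhem costs both children everything.

```python
# Illustrative payoffs (Angelica's payoff first).
G = {
    ("wake", "wake"): (1, 1), ("wake", "sleep"): (2, 0),
    ("sleep", "wake"): (0, 0), ("sleep", "sleep"): (1, 1),
}
OPTIONS = ("wake", "sleep")

def is_nash(x):
    """Is the profile x a classical nash equilibrium of G?"""
    a, t = x
    best_a = max(G[(o, t)][0] for o in OPTIONS)  # Angelica's best deviation
    best_t = max(G[(a, o)][1] for o in OPTIONS)  # Tommy's best deviation
    return G[x][0] == best_a and G[x][1] == best_t

print(is_nash(("wake", "wake")))    # True
print(is_nash(("sleep", "sleep")))  # False, despite the identical payoff (1, 1)
```

Two option-profiles with the same payoff, one optimal and one not: optimality is not determined by payoff, so the nash sum is not consequentialist.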

This shows why we couldn’t have defined optimisers as functions with type-signature $((X \to R) \to \mathcal{P}(R))$, which bakes in the assumption that all optimisers are consequentialist. In order to ensure the nash-sum closure of optimisers, we needed to include nonconsequentialist optimisers.[9]


Beyond CDT?

Warning: This section is a speculative extension of the existing literature; it’s merely a proof-of-concept of how higher-order game theory might extend to non-CDT agents.

Let $G : \{C, D\} \times \{C, D\} \to \mathbb{R}^2$ be the prisoner’s dilemma from the previous section. We may readily check that $(\mathrm{argmax}_1 \oplus \mathrm{argmax}_2)(G) = \{(D, D)\}$, which corresponds to the well-known result that two utility-maximisers will defect in the prisoner’s dilemma.

Now, some decision theorists think that it’s not generally true that two rational utility-maximisers will defect in the prisoner’s dilemma — e.g. if they’re both perfect replicas of one another then they’ll both cooperate! The reasoning is this — cooperation from the first prisoner would count as evidence (to the first player) of high utility, and defection from the first prisoner would count as evidence of low utility. This corresponds to the observation that $\pi_1(G(C, C)) > \pi_1(G(D, D))$. So the first player should cooperate. Ditto the second player.

So what went wrong?

It all comes down to an ambiguity with the task $f : X \to R$. What relationship is the task supposed to capture, exactly? The formalism itself is agnostic.

  • According to causal decision theory, the task $f$ is supposed to track which payoff $f(x)$ is caused by the agent choosing $x$.

  • According to evidential decision theory, the task $f$ is supposed to track which payoff $f(x)$ is evidenced by the agent choosing $x$.

So let’s explain our answer $(D, D)$, which agrees with CDT. We can trace the problem to the deviation maps in the definition of $\oplus$.

The $i$-th deviation map is given by $\mathrm{dev}^x_i(x'_i)_j = x_j$ if $j \neq i$ and $\mathrm{dev}^x_i(x'_i)_i = x'_i$, for all $x$, $x'_i$, $i$, and $j$.

The map $\mathrm{dev}^x_i : X_i \to \prod_{j \in N} X_j$ is the causal dependence of the overall option-profile on the $i$th player’s choice. Hence, when the game $G$ is the causal dependence of the overall payoff on the option-profile, it follows that the task $G \circ \mathrm{dev}^x_i$ is the causal dependence of the payoff on the $i$th player’s choice. This is why we got the CDT answer — causation composed with causation is causation.

According to EDT, however, the task is supposed to capture the evidential dependence of the payoff on the player’s choice. And when the game $G$ is the evidential dependence of the overall payoff on the option-profile, it isn’t true that the task $G \circ \mathrm{dev}^x_i$ is the evidential dependence of the payoff on the $i$-th player’s choice. Evidence composed with causation isn’t evidence.

To achieve the EDT answer, we need a function $\mathrm{ev}^x_i : X_i \to \prod_{j \in N} X_j$ which captures the evidential dependence of the option-profile on the $i$th player’s choice. In the case that all the players are replicas, $\mathrm{ev}^x_i(x'_i) = \langle x'_i, \ldots, x'_i \rangle$. Replacing $\mathrm{dev}^x_i$ in the definition of $\oplus$ with $\mathrm{ev}^x_i$, we’d get a different binary operator $\oplus_{\mathrm{EDT}}$ which yields the EDT prediction, namely that $(\mathrm{argmax}_1 \oplus_{\mathrm{EDT}} \mathrm{argmax}_2)(G) = \{(C, C)\}$.
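Swapping the deviation maps is a one-line change in code. Below is a sketch (the prisoner’s-dilemma payoffs are my illustrative choices, and `ev` hard-codes the perfect-replica case):

```python
# Prisoner's dilemma with illustrative payoffs.
G = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
     ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
OPTIONS = ("C", "D")

def dev(x, i, o):
    """Causal (CDT) deviation map: replace coordinate i with o."""
    y = list(x)
    y[i] = o
    return tuple(y)

def ev(x, i, o):
    """Evidential deviation map for perfect replicas:
    my choosing o is evidence that everyone chooses o."""
    return (o, o)

def nash(dmaps):
    equilibria = []
    for x in G:
        tasks = [{o: G[d(x, i, o)][i] for o in OPTIONS}
                 for i, d in enumerate(dmaps)]
        if all(t[x[i]] == max(t.values()) for i, t in enumerate(tasks)):
            equilibria.append(x)
    return equilibria

print(nash([dev, dev]))  # → [('D', 'D')]  (CDT with CDT)
print(nash([ev, ev]))    # → [('C', 'C')]  (EDT with EDT, perfect replicas)
```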

It seems that we have another free parameter in our model! We can replace the family of causal deviation maps $(\mathrm{dev}_i)_{i \in N}$ with an arbitrary family of non-causal deviation maps $(\mathrm{d}_i)_{i \in N}$, and thereby encode information about each player’s idiosyncratic decision theory. If the $i$th player is CDT then $\mathrm{d}_i = \mathrm{dev}_i$, and if the $i$th player is EDT then $\mathrm{d}_i = \mathrm{ev}_i$.

Suppose we have a payoff space $R$, a family of sets $(X_i)_{i \in N}$, a family of optimisers $(\phi_i)_{i \in N}$ where $\phi_i \in ((X_i \to R) \to \mathcal{P}(X_i))$, and a family of decision theories $(\mathrm{d}_i)_{i \in N}$ where $\mathrm{d}_i : \prod_{j \in N} X_j \to (X_i \to \prod_{j \in N} X_j)$, and a game is a map $G : \prod_{i \in N} X_i \to R$. Then we can define the non-CDT best-response function of $G$ to be the function $\mathrm{BR}_G(x) = \prod_{i \in N} \phi_i(G \circ \mathrm{d}^x_i)$, and the non-CDT nash equilibria of $G$ to be the set $\{x : x \in \mathrm{BR}_G(x)\}$. Or something like that.

With this machinery in place, we can answer even sillier questions —

An EDT utility-satisficer and a CDT utility-maximiser are playing the game $G : X_1 \times X_2 \to \mathbb{R}^2$. They are causally separated, but they are negligibly likely to choose different options. What options might they choose?[10]

Well, a pair of options $(x_1, x_2)$ is in (non-CDT) nash equilibrium if and only if $\pi_1(G(x_1, x_1)) \geq \pi_1(G(a, a))$ and $x_2 \in \mathrm{argmax}(\pi_2 \circ G \circ \mathrm{dev}^x_2)$, where $a \in X_1$ is the satisficer’s anchor point. We can also filter out the option-profiles such that $x_1 \neq x_2$. The resulting list of option-profiles is easily computable from $G$ and $a$.

Suppose that $G$ is the prisoner’s dilemma, and $a = D$ is the satisficer’s anchor point. Then the unique nash equilibrium is $(D, D)$.

Exercise 7: If Angelica is an EDT utility-maximiser and Tommy is a CDT utility-maximiser, then which profiles are nash?


Recap

  • Classical game theory is the study of agents which choose options causing maximum utility. When many such agents (CDT-utility-maximisers) make simultaneous choices they’ll collectively choose an option-profile $x$ in nash equilibrium, i.e. $x \in \mathrm{BR}_G(x)$ where $\mathrm{BR}_G$ is the best-response function.

  • Many AI safety researchers are worried about agents which choose options causing maximum utility, whether because of the word “cause”, the word “maximum”, or the word “utility”, so they study agents which relax one (or many) of these three assumptions.

  • Today we introduced higher-order game theory, which generalises classical game theory by considering agents which don’t maximise utility. We generalised the classical nash equilibrium to arbitrarily many agents with arbitrary optimisers.

  • Finally, I gestured at how we might generalise the nash equilibrium even further to the simultaneous choices of non-CDT agents.

Pretty cool!


Next time...

By tomorrow, I will hopefully have posted about mesa-optimisation, sequential choice, and non-possibilistic models of agents.


  1. ^

    The study of $0$-player games is called automata theory (in the discrete case) and dynamics (in the continuous case), while the study of $1$-player games is called optimisation.

  2. ^

    Recall that $(A \to B)$ denotes the set of functions of type $A \to B$.

    That is, if $A$ and $B$ are sets, then $(A \to B) = \{f : f \text{ is a function from } A \text{ to } B\}$.

  3. ^

    See Aumann’s 1964 paper Markets with a Continuum of Traders for a game with uncountably many players.

  4. ^

    Recall that $\mathrm{argmax}_\epsilon$ is the optimiser which maximises the function up to some fixed slack $\epsilon > 0$.

    That is, $\mathrm{argmax}_\epsilon(f) = \{x \in X : f(x) \geq \sup_{x' \in X} f(x') - \epsilon\}$.

  5. ^

    Recall that an optimiser $\phi \in ((X \to R) \to \mathcal{P}(X))$ is total if $\phi(f) \neq \emptyset$ for all tasks $f : X \to R$.

  6. ^

    $\delta$ is the Kronecker delta function: $\delta_{x y} = 1$ if $x = y$ and $\delta_{x y} = 0$ otherwise.

  7. ^

    Recall that an optimiser $\phi \in ((X \to R) \to \mathcal{P}(X))$ is consequentialist if $\phi(f) = f^{-1}(q(f))$ for some $q : (X \to R) \to \mathcal{P}(R)$.

    In other words, for any task $f$, if $f(x) = f(x')$ then $x \in \phi(f) \iff x' \in \phi(f)$.

    This condition says that, once we know the agent’s task, then the optimality of a particular choice is determined by its payoff.

  8. ^

    Maybe there’s something deep here about group-level deontology arising from individual-level consequentialism.

  9. ^

    Warning: There is a remark that one regularly encounters in metaethics: moral consequentialism is tautological, because we can consider the choice of the decision-maker as a consequence of the decision itself, ensuring that any decision-rule is consequentialist.

    This metaethical remark corresponds to the following fact in higher-order decision theory: if a nonconsequentialist optimiser $\phi \in ((X \to R) \to \mathcal{P}(X))$ faces the task $f : X \to R$, then they will choose the same options as some consequentialist optimiser $\phi' \in ((X \to X \times R) \to \mathcal{P}(X))$ facing the task $\langle \mathrm{id}_X, f \rangle : X \to X \times R$, $x \mapsto (x, f(x))$. It follows that whether an agent is a consequentialist depends heavily on which physical consequences of the decision we are tracking.

  10. ^

    Recall that a satisficer might choose any option which dominates some fixed option $a$, i.e. $\mathrm{satisfice}_a(f) = \{x \in X : f(x) \geq f(a)\}$. The option $a$ is called the anchor point.