I think game theory will never be reduced to decision theory; that would be like going back from Einstein to Newton. The contribution of my post is more about finding the natural boundaries of decision theory within game theory, and arguing that the Prisoner's Dilemma (PD) shouldn't be included.
You said something in the post that I’m going to assume is closely related:
(Also this shows how Von Neumann-Morgenstern expected utility maximization is basically a restriction of UDT to single player games with perfect recall. For imperfect recall (AMD) or multiple players (Psy-Kosh) you need the full version.)
I think I have two points which may shift you on this:
If agents are using reflective oracles, then AIXI-like constructions will play Nash equilibria. Reflective oracles are, to a certain extent, a natural toy model of agents reasoning about each other: they solve the grain of truth problem, letting us represent Bayesians who reason about other agents the same way they reason about everything else, rather than in the game-theoretic way where there's a special procedure for reasoning about agents. I.e., Nash equilibria are then just a consequence of maximizing expected utility.
There’s a sense in which correlated equilibria are what you get if you want game theory to follow from individual rationality axioms rather than generalize them; this is argued in Aumann’s “Correlated Equilibrium as an Expression of Bayesian Rationality”.
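To make the first point concrete, here is a minimal sketch of the claim that a Nash equilibrium is nothing more than a strategy profile where each player maximizes expected utility given the others. The 2x2 representation and the Matching Pennies example are my own illustration, not anything from the post:

```python
# A (possibly mixed) strategy in a 2x2 game is just the probability of
# playing action 0. A profile is a Nash equilibrium exactly when each
# player's strategy maximizes expected utility against the other's.

def expected_utility(payoff, p_self, p_other):
    """Expected payoff of playing action 0 with prob. p_self vs. p_other."""
    return sum(
        pa * pb * payoff[a][b]
        for a, pa in enumerate((p_self, 1 - p_self))
        for b, pb in enumerate((p_other, 1 - p_other))
    )

def is_best_response(payoff, p_self, p_other, eps=1e-9):
    """p_self is a best response iff no pure action does strictly better."""
    u = expected_utility(payoff, p_self, p_other)
    return all(expected_utility(payoff, pure, p_other) <= u + eps
               for pure in (0.0, 1.0))

# Matching Pennies: row wants to match, column wants to mismatch.
row = [[1, -1], [-1, 1]]   # row player's payoffs, indexed [own][other]
col = [[-1, 1], [1, -1]]   # column player's payoffs, indexed [own][other]

# The 50/50 profile is a Nash equilibrium: each side is maximizing
# expected utility given the other, with no equilibrium concept imposed
# beyond that.
assert is_best_response(row, 0.5, 0.5)
assert is_best_response(col, 0.5, 0.5)
```

Under this framing the equilibrium condition really is just "everyone is an expected utility maximizer with correct beliefs about the others", which is the sense in which reflective-oracle agents end up at Nash.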
Yeah, that’s similar to how Benya explained reflective oracles to me years ago. It made me very excited about the approach back then. But at some point I realized that to achieve anything better than mutual defection in the PD, the oracle needs to have a “will of its own”, pulling the players toward Pareto optimal outcomes. So I started seeing it as another top-down solution to game theory, and my excitement faded.
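The tension here is easy to exhibit: with plain expected-utility best responses, the one-shot PD locks in mutual defection even though both players prefer mutual cooperation. A small sketch, using the standard textbook payoffs (T=5, R=3, P=1, S=0), which are my own choice for illustration:

```python
# One-shot Prisoner's Dilemma: mutual defection is the unique pure-strategy
# Nash equilibrium, yet (C, C) Pareto-dominates it. Getting the players to
# (C, C) requires something beyond individual best-responding.

ACTIONS = ("C", "D")
PAYOFF = {  # (my action, other's action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def nash_equilibria():
    """Pure profiles where neither player gains by deviating unilaterally."""
    eqs = []
    for a in ACTIONS:
        for b in ACTIONS:
            a_ok = all(PAYOFF[(a, b)] >= PAYOFF[(a2, b)] for a2 in ACTIONS)
            b_ok = all(PAYOFF[(b, a)] >= PAYOFF[(b2, a)] for b2 in ACTIONS)
            if a_ok and b_ok:
                eqs.append((a, b))
    return eqs

print(nash_equilibria())  # [('D', 'D')] -- the unique equilibrium
# Yet (C, C) pays both players 3 > 1, so it Pareto-dominates (D, D).
```

This is why an oracle that delivers anything better than (D, D) has to be doing more than answering the players' expected-utility queries: it has to select among outcomes, i.e., have a "will of its own".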
Maybe there’s not much point in trying to sway my position now, because there are already people who believe in cooperative oracles, and more power to them. But this also reminds me of a conversation I had with Patrick several months before the Modal Combat paper came out. Everyone was pretty excited about it then, but I kept saying it would lead to a zoo of solutions, not some unique best solution showing the way forward. Years later, that’s how it played out.
We don’t have any viable attack on game theory to date, and I can’t even imagine what one could look like. In the post I tried to do the next best thing and draw a line: these problems are amenable to decision theory, and these aren’t. Maybe if I get the line just right, one day it will show me an opening.
Yeah, I also put a significant probability on the “there’s going to be a zoo of solutions” model of game theory. I suppose I’ve recently been more optimistic than usual about non-zoo solutions.