I think this shows clearly that dynamics don’t always lead to the same place as equilibrium rationality concepts. If someone is already convinced that the dynamics matter, this leads naturally to the thought that the equilibrium concepts are missing something important. But I think that at least some discussions of rationality (including some on this site) seem to be committed to some sort of “high road” idea on which it really is the equilibrium concept that is core to rationality, and the dynamics were at best a suggestive motivation. (I think I see this in some discussions of functional decision theory as “the decision theory that a perfectly rational agent would opt to self-program,” with the idea that you don’t actually need to go through some process of self-reprogramming to get there.)
Is there an argument to convince those people that the dynamics really are relevant to rationality itself, and not just to predictions of how certain naturalistic groups of limited agents will come to behave in their various local optima?
That’s exactly right. Results showing that low-rationality agents don’t always converge to a Nash equilibrium (NE) do not provide a compelling argument against the thesis that high-rationality agents do or should converge to NE. As you suggest, to address this question, one should directly model high-rationality agents and analyze their behavior.
We’d love to write another post on the high-rationality road at some point and would greatly appreciate your input!
Aumann & Brandenburger (1995),“The Epistemic Conditions for Nash Equilibrium,” and Stalnaker (1996),“Knowledge, Belief, and Counterfactual Reasoning in Games,” provide good analyses of the conditions for NE play in strategic games of complete and perfect information.
For games of incomplete information, Kalai and Lehrer (1993), “Rational Learning Leads to Nash Equilibrium,” demonstrate that when rational agents are uncertain about one another’s types but their priors are mutually absolutely continuous, Bayesian learning guarantees convergence in the limit to Nash play in repeated games. Together, these results establish a generous range of conditions (mutual knowledge of rationality, mutual absolute continuity of priors) that ensure convergence to a Nash equilibrium.
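To give the flavor of such results, here is a minimal simulation sketch. To be clear, this is not Kalai and Lehrer’s setting (their agents face uncertainty over full repeated-game strategies, i.e., types); it is fictitious play, which can be read as Bayesian learning with a Dirichlet prior under the working assumption that the opponent plays some fixed mixed strategy. The game and priors are ours for illustration.

```python
import numpy as np

# Pure coordination game: each player gets 1 if the actions match, else 0.
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

# Dirichlet pseudo-counts over the opponent's actions (posterior mean =
# normalized counts). Both priors lean toward action 0; as in the theorems,
# the conclusion is sensitive to the priors (adversarially crossed priors
# can produce persistent miscoordination).
counts = [np.array([2.0, 1.0]),   # player 0's beliefs about player 1
          np.array([2.0, 1.0])]   # player 1's beliefs about player 0

last = None
for t in range(500):
    acts = []
    for i in (0, 1):
        belief = counts[i] / counts[i].sum()          # posterior mean
        acts.append(int(np.argmax(PAYOFF @ belief)))  # myopic best response
    counts[0][acts[1]] += 1   # Bayesian update on the observed action
    counts[1][acts[0]] += 1
    last = tuple(acts)

print("play locks in on:", last)   # (0, 0): a pure Nash equilibrium
print("beliefs:", [list(c / c.sum()) for c in counts])  # each -> roughly [1, 0]
```

Each player’s posterior concentrates on what the opponent actually does, prediction becomes accurate, and best responses to accurate predictions are exactly Nash play.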
However, there are subtle limitations to this result. Foster & Young (2001), “On the Impossibility of Predicting the Behavior of Rational Agents,” show that in near-zero-sum games with imperfect information, agents cannot learn to predict one another’s actions and, as a result, do not converge to Nash play. In such games, mutual absolute continuity of priors cannot be satisfied.
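A companion sketch of the failure mode, again purely illustrative rather than Foster and Young’s construction: run the same learners on matching pennies, whose only Nash equilibrium is mixed. Long-run empirical frequencies drift toward the equilibrium mixture, but period-by-period play keeps flip-flopping and never settles.

```python
import numpy as np

# Matching pennies: player 0 wins by matching, player 1 by mismatching.
P0 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
P1 = -P0
payoffs = [P0, P1]

# Slightly asymmetric priors to avoid argmax ties on the first round.
counts = [np.array([1.1, 1.0]),   # player 0's pseudo-counts on player 1
          np.array([1.0, 1.1])]   # player 1's pseudo-counts on player 0

history = []
for t in range(10_000):
    acts = []
    for i in (0, 1):
        belief = counts[i] / counts[i].sum()
        acts.append(int(np.argmax(payoffs[i] @ belief)))
    history.append(acts)
    counts[0][acts[1]] += 1
    counts[1][acts[0]] += 1

hist = np.array(history)
# Marginal frequencies approach the mixed-NE marginals (~0.5 each)...
print("empirical frequency of action 1:", hist.mean(axis=0))
# ...yet the joint action keeps switching arbitrarily late in the run:
switches = np.nonzero((hist[1:] != hist[:-1]).any(axis=1))[0]
print(len(switches), "switches; last one at round", switches[-1] + 1)
```

There is no pure profile for deterministic best responders to settle on, so play cycles forever: the aggregate statistics look like equilibrium while the actual behavior never does.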