My thinking is somewhat similar to Vanessa’s. I think a full explanation would require a long post in itself. It’s related to my recent thinking about UDT and commitment races. But, here’s one way of arguing for the approach in the abstract.
You once asked:

Assuming that we do want to be pre-rational, how do we move from our current non-pre-rational state to a pre-rational one? This is somewhat similar to the question of how we move from our current non-rational (according to ordinary rationality) state to a rational one. Expected utility theory says that we should act as if we are maximizing expected utility, but it doesn’t say what we should do if we find ourselves lacking a prior and a utility function (i.e., if our actual preferences cannot be represented as maximizing expected utility).
The fact that we don’t have good answers for these questions perhaps shouldn’t be considered fatal to pre-rationality and rationality, but it’s troubling that little attention has been paid to them, relative to defining pre-rationality and rationality. (Why are rationality researchers more interested in knowing what rationality is, and less interested in knowing how to be rational? Also, BTW, why are there so few rationality researchers? Why aren’t there hordes of people interested in these issues?)
My contention is that rationality should be about the update process. It should be about how you adjust your position. We can have abstract rationality notions as a sort of guiding star, but we also need to know how to steer by them.
Some examples:
Logical induction can be thought of as the result of performing this transform on Bayesianism; it describes belief states which are not coherent, and gives a rationality principle about how to approach coherence—rather than just insisting that one must somehow approach coherence.
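To give the flavor in code (a toy of my own, not the actual logical-induction construction): here’s a belief state which starts out incoherent, together with a dynamic that shrinks the incoherence over time rather than demanding coherence up front.

```python
# Toy sketch only -- NOT the logical induction algorithm. It just illustrates
# a belief state that is allowed to be incoherent now, with a dynamic that
# moves it toward coherence over time.

def coherence_gap(p_a, p_not_a):
    # For a sentence A and its negation, coherence requires the prices to sum to 1.
    return p_a + p_not_a - 1.0

p_a, p_not_a = 0.9, 0.4          # an incoherent starting belief state
for step in range(100):
    gap = coherence_gap(p_a, p_not_a)
    # A hypothetical "trader" profits from the gap; the market response
    # shrinks it a little each step rather than eliminating it instantly.
    p_a -= 0.05 * gap
    p_not_a -= 0.05 * gap

print(round(p_a, 3), round(p_not_a, 3))  # prices now sum to (approximately) 1
```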
Evolutionary game theory is more dynamic than the Nash story. It concerns itself more directly with the question of how we get to equilibrium. Strategies which work better get copied. We can think about the equilibria, as we do in the Nash picture; but, the evolutionary story also lets us think about non-equilibrium situations. We can think about attractors (equilibria being point-attractors, vs orbits and strange attractors), and attractor basins; the probability of ending up in one basin or another; and other such things.
However, although the model seems good for studying the behavior of evolved creatures, there does seem to be something missing for artificial agents learning to play games; we don’t necessarily want to think of there as being a population which is selected on in that way.
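For concreteness, here’s a standard replicator-dynamics simulation; the Hawk-Dove payoffs are just an arbitrary choice for illustration.

```python
import numpy as np

# Replicator dynamics for a symmetric 2x2 game (Hawk-Dove, V=2, C=3).
# Strategies whose payoff beats the population average grow in share.
V, C = 2.0, 3.0
payoff = np.array([[(V - C) / 2, V],       # Hawk vs (Hawk, Dove)
                   [0.0,         V / 2]])  # Dove vs (Hawk, Dove)

x = np.array([0.1, 0.9])                  # initial shares of (Hawk, Dove)
dt = 0.01
for _ in range(20000):
    fitness = payoff @ x                  # expected payoff of each strategy
    average = x @ fitness                 # population-average payoff
    x = x + dt * x * (fitness - average)  # above-average strategies get copied

print(x)  # converges to the mixed equilibrium (V/C, 1 - V/C) = (2/3, 1/3)
```

Swap in a coordination-game payoff matrix and the initial shares determine which pure equilibrium’s basin you end up in—the attractor-basin style of thinking that the Nash picture doesn’t give you by itself.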
The complete class theorem describes utility-theoretic rationality as the end point of taking Pareto improvements. But, we could instead think about rationality as the process of taking Pareto improvements. This lets us think about (semi-)rational agents whose behavior isn’t described by maximizing a fixed expected utility function, but who develop one over time. (This model in itself isn’t so interesting, but we can think about generalizing it; for example, by considering the difficulty of the bargaining process—subagents shouldn’t just accept any Pareto improvement offered.)
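Here’s a minimal sketch of that process-view (my own toy example, with made-up options and subagent utilities): an agent composed of two subagents switches between options only when the switch is a Pareto improvement for both, and halts at a Pareto-optimal option.

```python
# Toy sketch (hypothetical options and utilities): each option is scored as
# (utility for subagent 1, utility for subagent 2). The agent switches only
# when the switch is a Pareto improvement, and stops when none remains.

options = {
    "a": (1.0, 1.0),
    "b": (2.0, 1.5),
    "c": (3.0, 1.2),   # Pareto-optimal, but not reachable once at "b"
    "d": (2.5, 2.5),   # also Pareto-optimal
}

def is_pareto_improvement(old, new):
    return (all(n >= o for o, n in zip(old, new))
            and any(n > o for o, n in zip(old, new)))

current = "a"
improved = True
while improved:
    improved = False
    for name, utilities in options.items():
        if is_pareto_improvement(options[current], utilities):
            current = name
            improved = True

print(current)  # ends at "d": no remaining option Pareto-dominates it
```

Note that which Pareto-optimal endpoint you reach depends on the order in which improvements come up—one way of seeing why the bargaining process matters and why subagents might not want to accept every Pareto improvement offered.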
Again, this model has drawbacks. I’m definitely not saying that by doing this you arrive at the ultimate learning-theoretic decision theory I’d want.