Reflections on Pre-Rationality

This continues my previous post on Robin Hanson’s pre-rationality by offering some additional comments on the idea.

The reason I re-read Robin’s paper recently was to see if it answers a question that’s related to another of my recent posts: why do we human beings have the priors that we do? Part of that question is why our priors are pretty close to each other, even if they’re not exactly equal. (Technically we don’t have priors because we’re not Bayesians, but we can be approximated as Bayesians, and those Bayesians have priors.) If we were created by a rational creator, then we would have pre-rational priors. (Which, since we don’t actually have pre-rational priors, seems to be a good argument against our having been created by a rational creator. I wonder what Aumann would say about this?) But we have other grounds for believing that we were instead created by evolution, which is not a rational process, in which case the concept doesn’t help to answer the question, as far as I can see. (Robin never claimed that it would, of course.)

The next question I want to consider is a normative one: is pre-rationality rational? Pre-rationality says that we should reason as if we were pre-agents who learned about our prior assignments as information, instead of just taking those priors as given. But then, shouldn’t we also act as if we were pre-agents who learned about our utility function assignments as information, instead of taking them as given? In that case, we’re led to the conclusion that we should all have common utility functions, or at least that pre-rational agents should have values that are much less idiosyncratic than ours. This seems to be a reductio ad absurdum of pre-rationality, unless there is an argument why we should apply the concept of pre-rationality only to our priors, and not to our utility functions. Or is anyone tempted to bite this bullet and claim that we should apply pre-rationality to our utility functions as well? (Note that if we were created by a rational creator, then we would have common utility functions.)

The last question I want to address is one that I already raised in my previous post. Assuming that we do want to be pre-rational, how do we move from our current non-pre-rational state to a pre-rational one? This is somewhat similar to the question of how we move from our current non-rational (according to ordinary rationality) state to a rational one. Expected utility theory says that we should act as if we are maximizing expected utility, but it doesn’t say what we should do if we find ourselves lacking a prior and a utility function (i.e., if our actual preferences cannot be represented as maximizing expected utility).
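To illustrate the simplest way preferences can fail to have such a representation, here is a toy sketch (my own illustration; the outcome names and preference pairs are made up): a strict preference cycle over sure outcomes admits no utility function at all, so expected utility theory gives no guidance about what such an agent should do.

```python
# A toy sketch: a strict preference cycle over sure outcomes admits no
# utility function, hence no expected-utility representation either.
# (Outcome names and preferences below are made up for illustration.)

def utility_representation(outcomes, strict_prefs):
    """Try to build u with u(a) > u(b) for every pair (a, b) in strict_prefs
    (meaning 'a is strictly preferred to b'). Returns a dict of utilities if
    one exists, else None. For a finite set of sure outcomes this succeeds
    exactly when the strict-preference relation is acyclic."""
    remaining = set(outcomes)
    u, level = {}, len(outcomes)
    while remaining:
        # Maximal outcomes: nothing still remaining is strictly preferred to them.
        maximal = [x for x in remaining
                   if not any(a in remaining and b == x for (a, b) in strict_prefs)]
        if not maximal:
            return None  # a preference cycle: no utility assignment can fit it
        for x in maximal:
            u[x] = level
            remaining.discard(x)
        level -= 1
    return u

# Transitive preferences: a utility function exists.
print(utility_representation(["A", "B", "C"], [("A", "B"), ("B", "C")]))
# -> {'A': 3, 'B': 2, 'C': 1}

# Cyclic preferences (A > B > C > A): no utility function, so "act as if
# maximizing expected utility" gives no guidance to an agent like this.
print(utility_representation(["A", "B", "C"], [("A", "B"), ("B", "C"), ("C", "A")]))
# -> None
```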

The fact that we don’t have good answers to these questions perhaps shouldn’t be considered fatal to pre-rationality and rationality, but it’s troubling that they have received so little attention compared to the work spent on defining pre-rationality and rationality. (Why are rationality researchers more interested in knowing what rationality is, and less interested in knowing how to be rational? Also, BTW, why are there so few rationality researchers? Why aren’t there hordes of people interested in these issues?)

As I mentioned in the previous post, I have an idea here, which is to apply some concepts related to UDT, in particular Nesov’s idea of trading across possible worlds. As I see it now, pre-rationality is mostly about the (alleged) irrationality of disagreements between counterfactual versions of the same agent, when those disagreements are caused by irrelevant historical accidents such as the random assortment of genes. But how can such agents reach an agreement about what their beliefs should be, when they can’t communicate with each other and coordinate physically? Well, at least in some cases, they may be able to coordinate logically. In my example of an AI whose prior was picked by the flip of a coin, the two counterfactual versions of the AI are similar and symmetrical enough that each can infer that if it were to change its prior from O or P to Q, where Q(A=heads)=0.5, the other AI would do the same; that inference fails for any Q’ != Q, due to the lack of symmetry.
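To make the symmetry argument concrete, here is a toy sketch in Python. The specific numbers for O and P are made up; the point is only that the heads/tails relabeling maps one counterfactual version of the AI onto the other, so a candidate common prior Q can serve as a logical coordination point only if it is invariant under that relabeling, which forces Q(A=heads)=0.5.

```python
# A toy sketch of the symmetry argument. The numbers for O and P are made up;
# all that matters is that swapping heads <-> tails maps one counterfactual
# version of the AI onto the other.

O = {"heads": 0.8, "tails": 0.2}   # hypothetical prior of the heads-version
P = {"heads": 0.2, "tails": 0.8}   # hypothetical prior of the tails-version

def swap(q):
    """Apply the heads <-> tails relabeling that maps one version onto the other."""
    return {"heads": q["tails"], "tails": q["heads"]}

print(swap(O) == P)  # True: the two counterfactual versions are mirror images

def is_coordination_point(q):
    """Each version can infer 'if I move to Q, my mirror image moves to Q too'
    only when Q is itself invariant under the relabeling; otherwise the mirror
    image, reasoning identically, would adopt swap(Q) instead of Q."""
    return swap(q) == q

Q = {"heads": 0.5, "tails": 0.5}
Q_prime = {"heads": 0.6, "tails": 0.4}

print(is_coordination_point(Q))        # True:  both versions land on the same prior
print(is_coordination_point(Q_prime))  # False: the mirror version would land on swap(Q')
```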

Of course, in the actual UDT, such “changes of prior” do not literally occur, because coordination and cooperation between possible worlds happen naturally as part of deciding acts and strategies, while one’s preferences stay constant. Is that sufficient, or do we really need to change our preferences and make them pre-rational? I’m not sure.