Reflections on Pre-Rationality

This continues my previous post on Robin Hanson’s pre-rationality, by offering some additional comments on the idea.

The reason I re-read Robin’s paper recently was to see if it answers a question that’s related to another of my recent posts: why do we human beings have the priors that we do? Part of that question is why our priors are pretty close to each other, even if they’re not exactly equal. (Technically we don’t have priors because we’re not Bayesians, but we can be approximated as Bayesians, and those Bayesians have priors.) If we were created by a rational creator, then we would have pre-rational priors. (Which, since we don’t actually have pre-rational priors, seems to be a good argument against us having been created by a rational creator. I wonder what Aumann would say about this?) But we have other grounds for believing that we were instead created by evolution, which is not a rational process, in which case the concept doesn’t help to answer the question, as far as I can see. (Robin never claimed that it would, of course.)

The next question I want to consider is a normative one: is pre-rationality rational? Pre-rationality says that we should reason as if we were pre-agents who learned about our prior assignments as information, instead of just taking those priors as given. But then, shouldn’t we also act as if we were pre-agents who learned about our utility function assignments as information, instead of taking them as given? In that case, we’re led to the conclusion that we should all have common utility functions, or at least that pre-rational agents should have values that are much less idiosyncratic than ours. This seems to be a reductio ad absurdum of pre-rationality, unless there is an argument why we should apply the concept of pre-rationality only to our priors, and not to our utility functions. Or is anyone tempted to bite this bullet and claim that we should apply pre-rationality to our utility functions as well? (Note that if we were created by a rational creator, then we would have common utility functions.)
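
To make the analogy explicit, here is the rough shape of the condition being discussed, in notation of my own choosing (a sketch, not a quotation from Robin’s paper): an agent i with ordinary prior p_i and a pre-prior is pre-rational when its ordinary prior equals the pre-prior updated on the prior assignment itself.

```latex
% Pre-rationality for priors (informal sketch; notation mine):
p_i(A) \;=\; \tilde{p}_i\bigl(A \mid \text{nature assigned priors } p_1, \ldots, p_n\bigr)
\qquad \text{for all events } A.
```

The utility-function version would condition a common “pre-utility” standpoint on the utility-function assignment in the same way; that analogue is my extrapolation, not something proposed in the paper.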

The last question I want to address is one that I already raised in my previous post. Assuming that we do want to be pre-rational, how do we move from our current non-pre-rational state to a pre-rational one? This is somewhat similar to the question of how we move from our current non-rational (according to ordinary rationality) state to a rational one. Expected utility theory says that we should act as if we are maximizing expected utility, but it doesn’t say what we should do if we find ourselves lacking a prior and a utility function (i.e., if our actual preferences cannot be represented as maximizing expected utility).
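
For concreteness, “represented as maximizing expected utility” here means, in the standard Savage-style sense, that there exist some prior p and utility function u such that, for any two acts f and g over states s:

```latex
f \succeq g
\quad\Longleftrightarrow\quad
\sum_{s} p(s)\, u\bigl(f(s)\bigr) \;\ge\; \sum_{s} p(s)\, u\bigl(g(s)\bigr)
```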

The fact that we don’t have good answers for these questions perhaps shouldn’t be considered fatal to pre-rationality and rationality, but it’s troubling that little attention has been paid to them, relative to defining pre-rationality and rationality. (Why are rationality researchers more interested in knowing what rationality is, and less interested in knowing how to be rational? Also, BTW, why are there so few rationality researchers? Why aren’t there hordes of people interested in these issues?)

As I mentioned in the previous post, I have an idea here, which is to apply some concepts related to UDT, in particular Nesov’s trading across possible worlds idea. As I see it now, pre-rationality is mostly about the (alleged) irrationality of disagreements between counterfactual versions of the same agent, when those disagreements are caused by irrelevant historical accidents such as the random assortment of genes. But how can such agents reach an agreement regarding what their beliefs should be, when they can’t communicate with each other and coordinate physically? Well, at least in some cases, they may be able to coordinate logically. In my example of an AI whose prior was picked by the flip of a coin, the two counterfactual versions of the AI are similar and symmetrical enough for each to infer that if it were to change its prior from O or P to Q, where Q(A=heads)=0.5, the other AI would do the same; this inference wouldn’t hold for any Q’ != Q, due to the lack of symmetry.
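
As a toy illustration of that symmetry argument (my own sketch, with made-up numbers; none of this code is from UDT or from the original example), the candidate compromise prior that both counterfactual branches can independently expect the other to adopt is exactly the one that is a fixed point of the symmetry swapping the two branches:

```python
# Toy illustration (hypothetical numbers): an AI's prior over a coin A was
# itself chosen by a fair coin flip, giving prior O in one counterfactual
# branch and its mirror image P in the other.

def mirror(prior):
    """The symmetry relating the two counterfactual branches:
    swap the roles of heads and tails."""
    return {"heads": prior["tails"], "tails": prior["heads"]}

O = {"heads": 0.9, "tails": 0.1}  # prior assigned in branch 1 (made-up numbers)
P = mirror(O)                     # prior assigned in branch 2

# A candidate common prior Q works only if each branch can infer that the
# other branch, running the mirror-image reasoning, would land on the same
# Q -- i.e. only if Q is a fixed point of the symmetry.
candidates = [
    {"heads": 0.3, "tails": 0.7},
    {"heads": 0.5, "tails": 0.5},
    {"heads": 0.7, "tails": 0.3},
    {"heads": 0.9, "tails": 0.1},
]
for Q in candidates:
    print(Q, "is a fixed point of the symmetry:", mirror(Q) == Q)

# Only Q = {"heads": 0.5, "tails": 0.5} qualifies; for any asymmetric Q',
# the mirrored reasoning yields mirror(Q') != Q', so the two branches
# could not count on converging to it.
```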

Of course, in the actual UDT, such “changes of prior” do not literally occur, because coordination and cooperation between possible worlds happen naturally as part of deciding acts and strategies, while one’s preferences stay constant. Is that sufficient, or do we really need to change our preferences and make them pre-rational? I’m not sure.