When does rationality-as-search have nontrivial implications?

(This originated as a comment on the post “Embedded World-Models,” but it makes a broadly applicable point and is substantial enough to stand alone, so I thought I’d make it a post as well.)


This post feels quite similar to things I have written in the past to justify my lack of enthusiasm about idealizations like AIXI and logically-omniscient Bayes. But I would go further: I think that grappling with embeddedness properly will inevitably make theories of this general type irrelevant or useless, so that “a theory like this, except for embedded agents” is not a thing that we can reasonably want. To specify what I mean, I’ll use this paragraph as a jumping-off point:

Embedded agents don’t have the luxury of stepping outside of the universe to think about how to think. What we would like would be a theory of rational belief for situated agents which provides foundations that are similarly as strong as the foundations Bayesianism provides for dualistic agents.

Most “theories of rational belief” I have encountered—including Bayesianism in the sense I think is meant here—are framed at the level of an evaluator outside the universe, and have essentially no content when we try to transfer them to individual embedded agents. This is because these theories tend to be derived in the following way:

  • We want a theory of the best possible behavior for agents.

  • We have some class of “practically achievable” strategies P, which can actually be implemented by agents. We note that an agent’s observations provide some information about the quality of different strategies p ∈ P. So if it were possible to follow a rule R like “find the best p ∈ P given your observations, and then follow that p,” this rule would spit out very good agent behavior.

  • Usually we soften this to a performance-weighted average rather than a hard argmax, but the principle is the same: if we could search over all of P, the rule R that says “do the search and then follow what it says” can be competitive with the very best p ∈ P. (Trivially so, since it has access to the best strategies, along with all the others.)

  • But usually R ∉ P. That is, the strategy “search over all practical strategies and follow the best ones” is not itself a practical strategy. But we argue that this is fine, since we are constructing a theory of ideal behavior. It doesn’t have to be practically implementable. (A minimal code sketch of this schema follows the list.)
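Here is a minimal sketch of the schema above, in Python. All of the names (Strategy, ideal_rule_R, score, and so on) are illustrative inventions of mine, not taken from any particular formalism; the only point is to show where the argmax, or its performance-weighted softening, sits relative to the strategies it searches over.

```python
# Illustrative sketch of the rationality-as-search schema; all names are hypothetical.
import math
from typing import Callable, Dict, List, Sequence

# A "practical strategy": a function from an observation history to an action.
Strategy = Callable[[Sequence[str]], str]
# A scoring rule: how good a strategy looks given the observations so far.
Score = Callable[[Strategy, Sequence[str]], float]

def ideal_rule_R(P: List[Strategy], score: Score, obs: Sequence[str]) -> str:
    """The rule R: evaluate every strategy in P on the observations, then act as
    the best one would. Trivially competitive with the best member of P -- but
    the loop over all of P is exactly the step that need not itself be in P."""
    best = max(P, key=lambda p: score(p, obs))
    return best(obs)

def softened_rule_R(P: List[Strategy], score: Score, obs: Sequence[str]) -> str:
    """The softened version: a performance-weighted vote over the strategies'
    actions instead of a hard argmax."""
    votes: Dict[str, float] = {}
    for p in P:
        votes[p(obs)] = votes.get(p(obs), 0.0) + math.exp(score(p, obs))
    return max(votes, key=lambda action: votes[action])
```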

For example, in Solomonoff induction, P is defined by computability while R is allowed to be uncomputable. In the LIA construction, P is defined by polytime complexity while R is allowed to run slower than polytime. In logically-omniscient Bayes, finite sets of hypotheses can be manipulated in a finite universe, but the full Boolean algebra over hypotheses generally cannot. (N.B. I don’t think this last case fits my schema quite as well as the other two.)

I hope the framework I’ve just introduced helps clarify what I find unpromising about these theories. By construction, any agent you can actually design and run is a single element of P (a “practical strategy”), so every fact about rationality that can be incorporated into agent design gets “hidden inside” the individual p, and the only things you can learn from the “ideal theory” are things which can’t fit into a practical strategy.

For example, suppose (reasonably) that model averaging and complexity penalties are broadly good ideas that lead to good results. But all of the model averaging and complexity penalization that can be done computably happens inside some Turing machine or other, at the level “below” Solomonoff. Thus Solomonoff only tells you about the extra advantage you can get by doing these things uncomputably. Any kind of nice Bayesian average over Turing machines that can happen computably is (of course) just another Turing machine.
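As a concrete (and entirely hypothetical) illustration of that last point: a computable Bayesian mixture over a finite set of computable predictors is itself just one more computable predictor, i.e. one more element of P, living at the level below Solomonoff rather than at the level of the uncomputable ideal.

```python
# Hypothetical sketch: a computable Bayes mixture over computable predictors
# is itself just another computable predictor.
from typing import Callable, List, Sequence

# A predictor maps a bit history to the probability that the next bit is 1.
Predictor = Callable[[Sequence[int]], float]

def bayes_mixture(predictors: List[Predictor], priors: List[float]) -> Predictor:
    def mixture(history: Sequence[int]) -> float:
        # Weight each predictor by its prior times the likelihood it assigned
        # to the observed history (a standard Bayesian update).
        weights = []
        for predict, prior in zip(predictors, priors):
            likelihood = 1.0
            for t, bit in enumerate(history):
                p1 = predict(history[:t])
                likelihood *= p1 if bit == 1 else (1.0 - p1)
            weights.append(prior * likelihood)
        total = sum(weights) or 1.0
        # The posterior-weighted average prediction. Note the return type:
        # this whole construction is just another Predictor.
        return sum((w / total) * predict(history)
                   for w, predict in zip(weights, predictors))
    return mixture
```

Feed it, say, a uniform predictor and a “repeat the last bit” predictor, and you get a third predictor of exactly the same kind; nothing about the averaging escapes the class of things agents can actually run.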

This also explains why I find it misleading to say that good practical strategies constitute “approximations to” an ideal theory of this type. Of course, since R just says to follow the best strategies in P, if you are following a very good strategy in P your behavior will tend to be close to that of R. But this cannot be attributed to any of the searching over P that R does, since you are not doing a search over P; you are executing a single member of P and ignoring the others. Any searching that can be done practically collapses down to a single practical strategy, and any that doesn’t is not practical. (The short sketch below makes this collapse concrete.)
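Using the same toy types as in the earlier sketch (again, purely illustrative), the collapse looks like this: wrap any feasible search over a sub-class of strategies in a function, and what you get back is just one more strategy of the same type as its candidates.

```python
# Hypothetical sketch: a feasible search over strategies collapses into a
# single strategy of the same type as its candidates.
from typing import Callable, List, Sequence

Strategy = Callable[[Sequence[str]], str]
Score = Callable[[Strategy, Sequence[str]], float]

def search_then_act(candidates: List[Strategy], score: Score) -> Strategy:
    def strategy(obs: Sequence[str]) -> str:
        best = max(candidates, key=lambda p: score(p, obs))
        return best(obs)
    # From the outside this is just another Strategy -- one element of P,
    # not an instance of the ideal rule R ranging over all of P.
    return strategy
```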

Concretely, this talk of approximations is like saying that a very successful chess player “approximates” the rule “consult all possible chess players, then weight their moves by past performance.” Yes, the skilled player will play similarly to this rule, but they are not following it, not even approximately! They are only themselves, not any other player.

Any theory of ideal rationality that wants to be a guide for embedded agents will have to be constrained in the same ways the agents are. But theories of ideal rationality usually get all of their content by going to a level above the agents they judge. So this new theory would have to be a very different sort of thing.


To state all this more pithily: if we design the search space to contain everything feasible, then rationality-as-search has no feasible implications. If rationality-as-search is to have feasible implications, then the search space must be weak enough for there to be something feasible that is not a point in the search space.