Open Problems Related to Solomonoff Induction

Solomonoff Induction seems clearly “on the right track”, but there are a number of problems with it that I’ve been puzzling over for several years and have not made much progress on. I think I’ve talked about all of them in various comments in the past, but never collected them in one place.

Apparent Unformalizability of “Actual” Induction

Argument via Tarski’s Indefinability of Truth

Informally, the theorem states that arithmetical truth cannot be defined in arithmetic. The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system.
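For concreteness, here is a standard formulation of the theorem for arithmetic (a textbook statement, not original to this post):

```latex
% Tarski's undefinability of truth: there is no formula True(x) in the
% language of arithmetic such that, for every sentence \varphi with
% Gödel number \ulcorner\varphi\urcorner,
\nexists\,\mathrm{True}(x)\ \text{such that}\quad
\forall \varphi:\;\;
\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
\;\leftrightarrow\; \varphi
```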

Suppose we define a generalized version of Solomonoff Induction based on some second-order logic. The truth predicate for this logic can’t be defined within the logic, and therefore a device that can decide the truth value of arbitrary statements in this logic has no finite description within this logic. If an alien claimed to have such a device, this generalized Solomonoff Induction would assign the hypothesis that they’re telling the truth zero probability, whereas we would assign it some small but positive probability.

Argument via Berry’s Paradox

Consider an arbitrary probability distribution P, and the smallest integer (or the lexicographically least object) x such that P(x) < 1/3^^^3 (in Knuth’s up-arrow notation). Since x has a short description, a universal distribution shouldn’t assign it such a low probability, but P does, so P can’t be a universal distribution.
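One way to make the diagonalization explicit (my formalization, assuming P has a finite description and using the standard coding theorem for the universal distribution M):

```latex
% Let P be any candidate distribution with a finite description, and define
%   x_P = \min \{\, n \in \mathbb{N} : P(n) < 1/(3\uparrow\uparrow\uparrow 3) \,\}.
% The phrase "the least n with P(n) < 1/3^^^3" describes x_P, so
%   K(x_P) \le |\text{description of } P| + O(1),
% which is tiny compared to \log_2 (3\uparrow\uparrow\uparrow 3).
% But any universal distribution M satisfies M(x) \ge 2^{-K(x) - O(1)}, hence
M(x_P) \;\ge\; 2^{-K(x_P)-O(1)} \;\gg\; \frac{1}{3\uparrow\uparrow\uparrow 3} \;>\; P(x_P)
% so P \neq M: no describable P can itself be the universal distribution.
```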

Is Solomonoff Induction “good enough”?

Given the above, is Solomonoff Induction nevertheless “good enough” for practical purposes? In other words, would an AI programmed to approximate Solomonoff Induction do as well as any other possible agent we might build, even though it wouldn’t have what we’d consider correct beliefs?

Is complexity objective?

Solomonoff Induction is supposed to be a formalization of Occam’s Razor, and it’s confusing that the formalization has a free parameter in the form of a universal Turing machine that is used to define the notion of complexity. What’s the significance of the fact that we can’t seem to define a parameterless concept of complexity? That complexity is subjective?
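The standard partial answer, for what it’s worth, is the invariance theorem: switching universal machines changes complexity only by an additive constant, though that constant depends on the pair of machines and can be arbitrarily large:

```latex
% Invariance theorem: for any two universal prefix machines U and V,
% there exists a constant c_{UV}, independent of x, such that
\forall x:\quad |K_U(x) - K_V(x)| \;\le\; c_{UV}
```

So complexity is “objective up to a constant”, which is exactly what makes the free-parameter worry hard to dismiss or to resolve: the asymptotics are machine-independent, but judgments about any particular finite object are not.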

Is Solomonoff an ideal or an approximation?

Is it the case that the universal prior (or some suitable generalization of it that somehow overcomes the above “unformalizability problems”) is the “true” prior and that Solomonoff Induction represents idealized reasoning, or does Solomonoff just “work well enough” (in some sense) at approximating any rational agent?
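One precise sense in which Solomonoff “works well enough” is the dominance property of the universal prior M (a standard result, stated here for context): M multiplicatively dominates every lower-semicomputable semimeasure Q, which bounds how badly it can underperform any such competitor:

```latex
% Dominance: for every lower-semicomputable semimeasure Q there is a
% constant c_Q > 0 (roughly 2^{-K(Q)}) such that
\forall x:\quad M(x) \;\ge\; c_Q \cdot Q(x)
% Consequently M's cumulative log-loss on data drawn from any computable Q
% exceeds Q's own log-loss by at most the constant \log_2 (1/c_Q).
```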

How can we apply Solomonoff when our inputs are not symbol strings?

Solomonoff Induction is defined over symbol strings (for example, bit strings) but our perceptions are made of “qualia” instead of symbols. How is Solomonoff Induction supposed to work for us?

What does Solomonoff Induction actually say?

What does Solomonoff Induction actually say about, for example, whether we live in a creatorless universe that runs on physics? Or the Simulation Argument?