Does Solomonoff always win?

Can a computable human beat a Solomonoff hyperintelligence at making predictions about an incoming sequence of bits? If the sequence is computable, he probably can’t. I’m interested in what happens when the sequence can be uncomputable. The answer depends on what you mean by “beat”.

Game 1: on each round you and Omega state your probabilities that the next bit will be 1. The logarithm of the probability you assigned to the actual outcome gets added to your score. (This setup is designed to incentivize players to report their true beliefs; see Eliezer’s technical explanation.)

Game 2: you both start with a given sum of money. On each round you’re allowed to bet some of it on 0 or 1 at 1:1 odds. Your capital cannot go below zero. (This is the “martingale game”; for motivation see the section on “constructive martingales” in the Wikipedia article on Martin-Löf randomness.)

Game 3: on each round you call out 0 or 1 for the next bit. If you guess right you win 1 dollar; otherwise you lose 1 dollar. Going below zero is allowed. (This simple game was suggested by Wei Dai in this thread on one-logic.)
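To make the payoff rules concrete, here’s a minimal Python sketch of the three scoring schemes. The function names and interfaces are mine, just for illustration; the players themselves aren’t modelled.

```python
import math

def score_game1(probs_of_one, bits):
    """Game 1 (log scoring): probs_of_one[t] is the probability the player
    assigned to bit t being 1; add up the logs of the probabilities assigned
    to whatever actually happened."""
    return sum(math.log(p if b == 1 else 1.0 - p)
               for p, b in zip(probs_of_one, bits))

def score_game2(bets, bits, capital=1.0):
    """Game 2 (martingale): each bet is a (stake, guess) pair; a correct
    guess wins the stake at 1:1 odds, a wrong one loses it, and the stake
    may never exceed the current capital, so capital stays nonnegative."""
    for (stake, guess), b in zip(bets, bits):
        stake = min(stake, capital)
        capital += stake if guess == b else -stake
    return capital

def score_game3(guesses, bits):
    """Game 3: call 0 or 1 each round; +1 dollar if right, -1 if wrong.
    The running total is allowed to go negative."""
    return sum(1 if g == b else -1 for g, b in zip(guesses, bits))
```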

As it turns out, in game 1 you cannot beat Omega by more than an additive constant, even if the input sequence is uncomputable and you know its definition. (I have linked before to Shane Legg’s text that can help you rederive this result.) Game 2 is game 1 in disguise, and there you cannot beat Omega by more than a multiplicative constant. In game 3 you can beat Omega. More precisely, you can sometimes stay afloat while Omega sinks below zero at a linear rate.
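For game 1 the additive constant comes from the usual dominance property of the universal prior. Here is a sketch, in notation that isn’t in the post: M is the Solomonoff semimeasure, ν is the computable measure induced by the human’s prediction strategy, and K(ν) is its complexity (I’m glossing over the usual semimeasure normalization details).

```latex
% M = Solomonoff semimeasure, \nu = computable measure induced by the human's
% strategy, K(\nu) = its prefix complexity. Dominance of the universal prior:
\[
  M(x_{1:n}) \;\ge\; 2^{-K(\nu)}\,\nu(x_{1:n})
  \qquad\text{for every finite prefix } x_{1:n}.
\]
% Log scores telescope, \(\sum_{t\le n}\log P(x_t\mid x_{<t}) = \log P(x_{1:n})\),
% so taking logarithms of both sides:
\[
  \log M(x_{1:n}) \;\ge\; \log \nu(x_{1:n}) - K(\nu)\log 2 .
\]
```

So the human can outscore Omega in game 1 by at most about K(ν) bits, whatever sequence actually arrives; read multiplicatively, the same inequality gives the constant for game 2.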

Here’s how. First let’s make the input sequence as malevolent toward Omega as possible: each bit is the opposite of the bit Omega, given the previous bits, is predicted to guess. As for the human, all he has to do is either always say 0 or always say 1. On every round one of those two strategies wins a dollar and the other loses one, so their scores sum to zero at all times, and at any moment at least one of them is nonnegative. Intuitively it seems likely that at least one of them will stay afloat while Omega loses every single round, because whenever one of them sinks, the other rises.
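The real Omega is uncomputable, so it can’t literally be simulated, but here is a toy Python sketch of the shape of the argument, with a simple majority-vote predictor standing in for Omega. All the names are mine, and the proxy is only meant to illustrate the construction.

```python
def omega_proxy(history):
    """Stand-in for Omega: guess the majority bit seen so far (1 on ties).
    The real Omega would run Solomonoff induction, which is uncomputable,
    but any deterministic predictor is exposed to the same construction."""
    return 1 if 2 * sum(history) >= len(history) else 0

def run(n_rounds=1000):
    history = []
    omega = always0 = always1 = 0
    for _ in range(n_rounds):
        guess = omega_proxy(history)
        bit = 1 - guess                          # malevolent sequence: always the opposite
        omega   += 1 if guess == bit else -1     # the proxy loses every round by construction
        always0 += 1 if bit == 0 else -1
        always1 += 1 if bit == 1 else -1
        history.append(bit)
    return omega, always0, always1

print(run())  # -> (-1000, 0, 0) with this particular proxy: it sinks at a
              # linear rate while both constant strategies hover around zero
```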

So is Solomonoff induction really the shining jewel at the end of all science and progress, or does that depend on the payoff setup? It’s not clear to me whether our own universe is computable. In the thread linked above Eliezer argued that we should be trying to approximate Solomonoff inference anyway:

If you’re dealing with non-exceptional situations—non-devilish environments—then shouldn’t a proof of epistemic error-boundedness generally carry over to a proof of decision error-boundedness? In other words, are you sure you’re not assuming that the environment is a superintelligent adversary which is strictly more superintelligent than you, which is the sort of reasoning that leads people to adopt randomized algorithms?

Eliezer’s argument sounds convincing, but for it to actually work, it must rely on some prior over all of math, including uncomputable universes, to justify the claim that such “devilish” or “adversarial” situations are rare. I don’t know of any prior over all of math, and Wei’s restatement of Berry’s paradox (also linked above) seems to show that inventing such a prior is futile. So we seem to lack any formal justification for adopting the Solomonoff distribution in reasoning about physics etc., unless I’m being stupid and missing something really obvious again.

(posting this to discussion because I’m no longer convinced that my mathy posts belong in the toplevel)