(The obvious way is “if EV(bet)>0, consider the bet taken.”)
I don’t think it’s really obvious. The (retroactively) obvious way to extend SI into a decision agent is AIXI, which has its own optimality theorems.
What shows up in the most obvious place—pg 328 of the book—is that the mean squared error decreases faster than 1/n. And in fact the proof method isn’t strong enough to show that the absolute error decreases faster than 1/n (because the derivative of the absolute error jumps discontinuously).
The absolute error is just the square root of the squared error. If the squared error decreases faster than 1/n, then the absolute error decreases faster than 1/sqrt(n).
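To spell out the step under discussion, here is a minimal sketch, assuming the standard form of Solomonoff’s convergence theorem, in which the sum of expected squared prediction errors is bounded in terms of K(μ) (the exact constant depends on the formulation):

```latex
% Sketch, not the book's exact statement: the convergence theorem bounds the
% *sum* of expected squared prediction errors by a constant times K(mu) ln 2.
\[
  \sum_{n=1}^{\infty} \mathbb{E}_\mu\!\left[\big(M(0 \mid x_{<n}) - \mu(0 \mid x_{<n})\big)^{2}\right]
  \;\le\; c \, K(\mu) \ln 2 \;<\; \infty .
\]
% Summability is why the squared error must beat 1/n: a squared error of order
% 1/n would make the sum diverge. For the absolute error, Jensen's inequality gives
\[
  \mathbb{E}_\mu\!\left[\big|M(0 \mid x_{<n}) - \mu(0 \mid x_{<n})\big|\right]
  \;\le\;
  \sqrt{\mathbb{E}_\mu\!\left[\big(M(0 \mid x_{<n}) - \mu(0 \mid x_{<n})\big)^{2}\right]},
\]
% so the same bound only forces the expected absolute error below roughly 1/sqrt(n).
```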
The point being that if your expected absolute error decreases like 1/n or slower, you make an infinite number of wrong bets.
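For what that amounts to quantitatively, here is a sketch, assuming a_n denotes the expected absolute error at step n (which, for 0/1 predictions, is the probability of a wrong bet):

```latex
% If a_n only decreases like 1/n or more slowly (and 1/sqrt(n) decreases more
% slowly than 1/n), the expected total number of wrong bets diverges:
\[
  a_n \;\ge\; \frac{c}{n} \ \text{for all large } n
  \quad\Longrightarrow\quad
  \sum_{n=1}^{\infty} a_n \;=\; \infty .
\]
% By contrast, a summable error rate (say a_n of order 1/n^{1+\epsilon}) would give
% a finite expected number of wrong bets, and the first Borel--Cantelli lemma would
% then give only finitely many wrong bets with probability one.
```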
You keep framing this in terms of bets. I don’t think there is any point in continuing this discussion.