Consider all the possible outcomes of the races. Averaged over every possible outcome sequence, any algorithm will be right half the time (in expectation, for the non-deterministic ones), but on any given subset of those outcomes, algorithms other than random guessing will do better or worse than one another. We’re looking for algorithms that do well on the subsets that match up to reality.
The more randomness in an algorithm, the less its performance varies across those subsets. By doing better on subsets that don’t match reality, the randomized weighted majority algorithm does worse on the subsets that do, which are the ones we care about. There are algorithms that do better in reality, and they have less randomness. (Now if none of them can be reduced from giant lookup tables, that’d be interesting...)
But the point of the randomized weighted majority guarantee is that it holds (up to the correctness of the random number generator) regardless of how much more complicated reality may be than the experts’ models.
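For concreteness, here is a minimal sketch of the randomized weighted majority algorithm under discussion (the function name, the penalty parameter epsilon, and the 0/1 encoding of predictions are my choices for illustration, not taken from the thread). Each round it samples an expert in proportion to its weight, then multiplicatively penalizes every expert that was wrong, which is what yields the guarantee relative to the best expert regardless of how the outcome sequence was generated:

```python
import random

def randomized_weighted_majority(expert_predictions, outcomes, epsilon=0.5, seed=0):
    """Run randomized weighted majority over a sequence of rounds.

    expert_predictions: list of rounds; each round is a list of 0/1
        predictions, one entry per expert.
    outcomes: list of the true 0/1 outcome for each round.
    Returns the total number of mistakes the algorithm made.
    """
    rng = random.Random(seed)
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, outcome in zip(expert_predictions, outcomes):
        # Sample one expert with probability proportional to its weight.
        total = sum(weights)
        r = rng.uniform(0, total)
        acc = 0.0
        choice = preds[-1]  # fallback against floating-point edge cases
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                choice = preds[i]
                break
        if choice != outcome:
            mistakes += 1
        # Multiplicatively penalize every expert that was wrong this round.
        for i, p in enumerate(preds):
            if p != outcome:
                weights[i] *= (1 - epsilon)
    return mistakes
```

The standard bound is that the expected number of mistakes is at most roughly (1 + epsilon/2) times the best expert's mistakes, plus (ln n)/epsilon; the wrong experts' weights decay geometrically, so a single reliable expert quickly dominates the sampling.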
How often are the models both perfectly contradictory and equal to chance? How often is reality custom tailored to make the algorithm fail?
Those are the cases you’re protecting against, no? I imagine there are more effective ways.