This is cool!
It would be nice to know what you can get in polynomial time. Can you find ϵ-approximate fixed points in this setting in time polynomial in 1/ϵ? That would give a polynomial time algorithm with the same regret asymptotically.
(It’s normally prediction with expert advice rather than assistance. And you should probably use bigger parentheses around sums.)
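To make the fixed-point question concrete: in one easy special case (an extra assumption, not part of the question), where the combined expert's update map happens to be a contraction, plain iteration already gives an ϵ-approximate fixed point in O(log(1/ϵ)) steps; the hardness discussed below is about maps without such structure. A minimal sketch with a made-up toy map:

```python
import numpy as np

def iterate_to_fixed_point(f, x0, eps, max_iters=100_000):
    # Plain fixed-point iteration; only guaranteed to converge when f
    # is a contraction (a strong extra assumption).
    x = x0
    for _ in range(max_iters):
        fx = f(x)
        if np.abs(fx - x).sum() <= eps:
            return x
        x = fx
    return x

# Toy contraction on the probability simplex: apply a column-stochastic
# matrix, then mix with the uniform distribution (the 1/2 mixing rate
# makes this a 1/2-contraction in L1 norm).
M = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.3],
              [0.1, 0.2, 0.6]])  # columns sum to 1
u = np.full(3, 1.0 / 3.0)
f = lambda x: 0.5 * (M @ x) + 0.5 * u

x = iterate_to_fixed_point(f, u, eps=1e-9)
```

Since each step halves the L1 distance to the fixed point, about log2(2/ϵ) iterations suffice here, so the runtime is polynomial (indeed logarithmic) in 1/ϵ for this special case.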
In general, finding ϵ-approximate fixed points is PPAD-hard. On the other hand, it seems like we could use an online learning algorithm to get, in polynomial time, a distribution over weight vectors such that the combined expert’s expected modification to a weight vector sampled from this distribution is small. I haven’t worked out the details, but some version of this might imply that the combined expert can’t expect to do better than you, provided you randomly choose a weight vector from this distribution and the loss function is fixed before you make this random choice.
(by the way, the same technique should work to construct a quasipolynomial-time randomized “logical inductor” satisfying a weaker property)
A very similar idea was actually introduced by Blum and Mansour in 2007 (page 1312), though they deal with linear rather than general continuous transformations and so can compute the fixed point easily.
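For concreteness on why the linear case is easy: when the transformation is given by a stochastic matrix Q (as in Blum and Mansour's swap-regret construction), the fixed point is just a stationary distribution p = pQ, which can be found by power iteration or by solving the linear system directly. A minimal sketch, with a made-up toy matrix:

```python
import numpy as np

def stationary_distribution(Q, iters=2000):
    # Fixed point p = p @ Q of a row-stochastic matrix Q, via power
    # iteration; converges whenever the chain is irreducible and aperiodic.
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(iters):
        p = p @ Q
    return p

# Toy 2-state row-stochastic matrix (ergodic, so the fixed point is unique).
Q = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p = stationary_distribution(Q)
# p satisfies p @ Q ≈ p; for this Q the exact answer is [5/6, 1/6]
```

Nothing like this is available for a general continuous transformation, which is where the PPAD-hardness mentioned above comes in.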