By “Grain of Ignorance” I mean that the semimeasure loss is nonzero at every string; that is, the conditionals of M never form a proper probability measure. Since this gap is not computable, it cannot be (easily) removed, though to be fair the conditional distribution is only limit computable anyway (the same is true of the normalized M). However, it is not clear that there is any natural/forced choice of normalization, so I usually think of the set of possible normalizations as a credal set (and I mean ignorance in that sense). I will soon put an updated version of my “Value under Ignorance” paper (which is about this) on arXiv.
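To make this concrete, here is a toy numeric sketch (the particular numbers are invented for illustration, not taken from M itself): a semimeasure assigns its continuations less total mass than the parent string, so the conditionals leave a gap, and any proper measure that gives each continuation at least the semimeasure's conditional mass is one admissible normalization. For a binary alphabet, that set of normalizations is an interval, i.e. a credal set.

```python
# Toy semimeasure over binary strings: M("0") + M("1") < M(""),
# so the conditionals at the empty string do not sum to 1.
M = {"": 1.0, "0": 0.4, "1": 0.35}

p0 = M["0"] / M[""]        # conditional mass on continuation 0
p1 = M["1"] / M[""]        # conditional mass on continuation 1
gap = 1.0 - (p0 + p1)      # the semimeasure loss at this string (0.25)

# The credal set of normalizations: every way of distributing the gap
# between the two continuations.  Its two extreme points:
credal_all_to_1 = (p0, p1 + gap)   # entire gap assigned to outcome 1
credal_all_to_0 = (p0 + gap, p1)   # entire gap assigned to outcome 0
```

Any convex combination of the two extreme points is also an admissible normalization, which is what makes the set a credal set rather than a single distribution.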
Vovk’s trick refers to predicting like the mixture: a “specialist expert” can opt out of offering a prediction by matching the Bayesian mixture’s prediction, so that its weight is not updated (assuming it has access to the Bayesian mixture). I think the usual citation is “Prediction with Expert Evaluators’ Advice” (see section 6), which is with Chernov. I believe this was an influence on logical induction.
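A minimal sketch of why matching the mixture freezes the weight (this is my own toy illustration, not the construction from the Chernov–Vovk paper): since the specialist predicts the mixture, and the mixture includes the specialist, the mixture is the fixed point mix = (Σᵢ wᵢ pᵢ) / (1 − w_spec); the Bayes update then multiplies the specialist's weight by mix/mix = 1.

```python
def mixture_with_specialist(w, preds, w_spec, outcomes):
    """Bayesian mixture when one expert (the specialist) predicts the
    mixture itself.  Solving mix = sum_i w_i p_i + w_spec * mix gives
    mix = sum_i w_i p_i / (1 - w_spec)."""
    return {o: sum(wi * p[o] for wi, p in zip(w, preds)) / (1.0 - w_spec)
            for o in outcomes}

# Two ordinary experts plus one specialist who opts out this round.
w = [0.4, 0.4]
w_spec = 0.2
p_a = {0: 0.9, 1: 0.1}
p_b = {0: 0.2, 1: 0.8}

mix = mixture_with_specialist(w, [p_a, p_b], w_spec, (0, 1))

outcome = 1
# Standard Bayes update: weight *= expert's likelihood / mixture likelihood.
posts = [wi * p[outcome] / mix[outcome] for wi, p in zip(w, [p_a, p_b])]
post_spec = w_spec * mix[outcome] / mix[outcome]  # ratio is 1: unchanged
```

The ordinary experts' posterior weights move with their likelihoods, while `post_spec` equals `w_spec` exactly, and the three posteriors still sum to 1.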
What do these refer to?
Here’s the paper: https://arxiv.org/abs/2512.17086