Thanks for reminding me about V-information. I am not sure how much I like this particular definition yet—but this direction of inquiry seems very important imho.
That said, I do like the core idea: it’s the minimum expected code length for a distribution under constraints on the code, namely constraints on the kinds of beliefs you’re allowed to hold (once you commit to a belief, the optimal code is, as always, the negative log probability).
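Concretely, if I’m remembering the paper’s notation right, the conditional V-entropy is something like

$$H_{\mathcal{V}}(Y \mid X) \;=\; \inf_{f \in \mathcal{V}} \; \mathbb{E}_{x, y \sim p}\left[-\log f[x](y)\right],$$

where the predictive family $\mathcal{V}$ is the set of allowed beliefs $f[x]$ about $Y$, and the V-information $I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X)$ is the drop in code length you get from being allowed to look at $x$.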
The examples in Proposition 1 were also pretty cool in that they give new characterizations of some well-known quantities. The log determinant of the covariance matrix does intuitively measure the uncertainty of a random variable, but it’s very cool to see that it in fact has an entropy interpretation!
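To spell out the case I have in mind (my own reconstruction, not the paper’s exact statement): if $\mathcal{V}$ is the family of Gaussian beliefs, the best Gaussian code for a distribution $p$ with covariance $\Sigma$ is the one that matches its mean and covariance, which gives

$$\inf_{q \,\text{Gaussian}} \; \mathbb{E}_{x \sim p}\left[-\log q(x)\right] \;=\; \tfrac{1}{2} \log \det(2\pi e\, \Sigma),$$

so the Gaussian V-entropy of $X$ is, up to additive constants, exactly the log determinant of its covariance, whether or not $p$ itself is Gaussian.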
It’s kinda sad that, after a brief search, it seems like none of the original authors are interested in extending this framework.