Yes, when trying to reuse the OP’s phrasing, maybe I wasn’t specific enough about what I meant. I wanted to highlight how the “fraction of variance explained” metric generalizes less well than other outputs from the same model.
For example, consider a case where a model of E[y] vs. x provides good out-of-sample predictions even when the distribution of x changes, e.g. because x stays within the range used to fit the model. Even then, the fraction of variance explained remains sensitive to the distribution of x. Of course, a confounder w can make y(x) less accurate out-of-sample when its distribution changes, indirectly “breaking” the learned y(x) relationship; but conversely, a shift in w can change the fraction of variance explained even when w is not a confounder and does not break the validity of y(x).
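A minimal simulation (made-up numbers, not from any real dataset) illustrates the point: the fitted model below is exactly the true relationship, so its prediction error is identical on both test sets, yet the fraction of variance explained collapses when the spread of x narrows — even though x stays well inside the fitted range.

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship: y = 2x + noise (noise sd = 1). Assume the model
# has learned this exactly, so predictions generalize under any shift
# in the distribution of x within the fitted range.
def simulate(x):
    return 2.0 * x + rng.normal(0.0, 1.0, size=x.shape)

def model(x):
    return 2.0 * x  # the (correctly) learned relationship

def r_squared(y, y_hat):
    # Fraction of variance explained: 1 - SS_res / SS_tot.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Wide spread of x: variance in y is dominated by x, so R^2 is high.
x_wide = rng.uniform(0.0, 10.0, 10_000)
y_wide = simulate(x_wide)

# Narrow spread of x (same range, just concentrated): R^2 drops,
# though the model is unchanged and still correct.
x_narrow = rng.uniform(4.5, 5.5, 10_000)
y_narrow = simulate(x_narrow)

print(r_squared(y_wide, model(x_wide)))      # high (≈ 0.97 in theory)
print(r_squared(y_narrow, model(x_narrow)))  # low (≈ 0.25 in theory)

# Yet the prediction error itself is the same in both settings:
rmse_wide = np.std(y_wide - model(x_wide))
rmse_narrow = np.std(y_narrow - model(x_narrow))
print(rmse_wide, rmse_narrow)  # both ≈ 1 (the noise sd)
```

The residual error never changes; only the denominator of R² (the total variance of y, driven by the spread of x) does. That is the sense in which the metric is a property of the population, not just of the model.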
Or, for a more concrete example: maybe some nutrients (e.g. Vitamin C) are not as predictive of individual health as they were in the past, because most people now get enough of them in their diet. Fundamentally, the relationship between those nutrients and health hasn’t changed, only the distribution; our model of that relationship is probably still good. This is a very simple example. Still, I think there is in general a lot of potential for misinterpreting this metric (not necessarily on this forum, but in public discourse broadly), especially as it is sometimes called a measure of variable importance. When I read the first part of this post about teachers from Scott Alexander, https://www.lesswrong.com/posts/K9aLcuxAPyf5jGyFX/teachers-much-more-than-you-wanted-to-know, I can’t conclude from “having different teachers explains 10% of the variance in test scores” that teaching quality doesn’t have much impact on the outcome. (And in fact, as a parent I would value teaching quality, but not a high variance in teaching quality within the school district. I wouldn’t want my kids’ learning of core topics to be strongly dependent on which school, or which class in that school, they are attending.)