I think I am giving up on correcting “google/wikipedia experts”; it’s a waste of time and a losing battle anyway. (I mean the GP here.)
I get the impression that ML folks have to be much more careful about overfitting because their methods aren’t guaranteed to find the “best” fit: they’re heavily non-deterministic. That means an overfitted model has essentially no chance of extrapolating successfully beyond the training set. Traditional stats doesn’t have this problem; there, your model is still optimal in some appropriate sense, no matter how poor your measures of fit are.
That said, this doesn’t make sense to me: bias-variance tradeoffs are fundamental everywhere, not specific to ML.
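For what it’s worth, the tradeoff is easy to see numerically in either framing. Here is a minimal sketch in plain NumPy (a hypothetical toy setup of my own: noisy samples of a sine, least-squares polynomial fits of increasing degree) showing training error falling as model capacity grows while the gap to held-out error widens:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Noisy training samples of a smooth underlying function.
true_fn = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0.0, 1.0, 15)
y_train = true_fn(x_train) + rng.normal(0.0, 0.2, x_train.size)

# Held-out points (noise-free targets, inside the training range).
x_test = np.linspace(0.02, 0.98, 50)
y_test = true_fn(x_test)

def poly_errors(degree):
    # Least-squares polynomial fit; Polynomial.fit rescales the
    # domain internally, which keeps high degrees well conditioned.
    p = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((p(x_train) - y_train) ** 2)
    test_mse = np.mean((p(x_test) - y_test) ** 2)
    return train_mse, test_mse

for d in (1, 3, 12):
    tr, te = poly_errors(d)
    print(f"degree {d:2d}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

The degree-1 fit underfits (high error everywhere, high bias); the degree-12 fit chases the noise, so its training error is tiny while its held-out error is much larger (high variance). Nothing about that picture depends on whether the fitting procedure is classical least squares or a stochastic ML method.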