Yeah, this is right. The variable uncertainty comes in for free when doing curve fitting—close to the datapoints your models tend to agree, far away they can shoot off in different directions. So if you have a probability distribution over different models, applying the correction for the optimizer’s curse has the very sensible effect of telling you to stick close to the training data.
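A minimal sketch of that effect (the data, the polynomial-ensemble setup, and the test points here are all my own assumptions, just for illustration): fit several plausible models to the same noisy data and watch how their predictions agree near the training points but fan out far away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of an unknown function on [0, 1].
x_train = rng.uniform(0.0, 1.0, size=20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.1, size=20)

# A crude "distribution over models": polynomials of several degrees,
# each a plausible explanation of the same data.
degrees = [2, 3, 4, 5, 6]
models = [np.polynomial.Polynomial.fit(x_train, y_train, deg=d)
          for d in degrees]

# Evaluate the ensemble inside and outside the training range.
for x in [0.5, 1.5, 3.0]:
    preds = np.array([m(x) for m in models])
    print(f"x = {x:4.1f}  spread across models = {preds.std():.3f}")

# The spread is small at x = 0.5 (inside the data) and blows up at
# x = 1.5 and x = 3.0, so a correction that penalizes options by model
# disagreement pushes the chosen action back toward the training region.
```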
Oh, yup, makes sense thanks
np, I’m just glad someone is reading/commenting :)