The usual plain-vanilla way is to use out-of-sample testing—check the model on data that neither the model nor the researchers have seen before. It’s common to set aside a portion of the data before modeling begins, explicitly so it can serve as a final check once the model is done.
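A minimal sketch of such a holdout split, in plain Python (the function name `holdout_split` and the 20% test fraction are illustrative choices, not a prescribed recipe):

```python
import random

def holdout_split(data, test_fraction=0.2, seed=0):
    """Shuffle and split `data` into a training set and a held-out
    test set. The test set should stay untouched until the model
    is final, then be used exactly once as the out-of-sample check."""
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    n_test = int(len(data) * test_fraction)
    test_idx = set(indices[:n_test])
    train = [d for i, d in enumerate(data) if i not in test_idx]
    test = [d for i, d in enumerate(data) if i in test_idx]
    return train, test

train, test = holdout_split(list(range(100)))
print(len(train), len(test))  # 80 20
```

The key discipline is procedural, not algorithmic: fixing the seed and performing the split before any modeling decisions are made is what keeps the holdout honest.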
In cases where the stability of the underlying process is in doubt, there may be no good option other than waiting a while and testing the (fixed-in-place) model on new data as it comes in.
The characteristics of the model’s fit are not necessarily a good guide to the model’s predictive capabilities. Overfitting is still depressingly common.
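A quick demonstration of why in-sample fit can mislead, assuming a toy setup of my own choosing (a degree-9 polynomial fit to 10 noisy points from a sine curve): the training error is essentially zero, yet fresh draws from the same process expose a much larger error.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 noisy training points from a known underlying process
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

# A degree-9 polynomial interpolates 10 points almost exactly,
# so the in-sample (training) error is near zero.
coeffs = np.polyfit(x_train, y_train, 9)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

# New data from the same process reveals the overfit.
x_test = rng.uniform(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.1, 200)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.2e}, test MSE: {test_mse:.2e}")
```

Judged by its fit alone the model looks perfect; only the out-of-sample comparison reveals how poorly it predicts.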