I don’t see how this is any different from, say, Bayesian inference. Ultimately your inferences depend on the model being true. You might add a bunch of complications to the model to account for many possibilities so that this is less of a problem, but ultimately your inferences are going to rely on what the model says, and if your model isn’t (approximately) true, you’re in trouble whether you’re doing Bayesian inference, NHST, or anything else.
(Though I suppose you could bite the bullet and say “you’re right, Bayes isn’t attempting to do induction either.” That would honestly surprise me.)
Edit: This is to say that I think you (and others) have a good argument for building better models—and maybe NHST practitioners are particularly bad about this—but I’m not talking about any specific model or the details of what NHST practitioners actually do. I’m talking about the general idea of hypothesis testing.
Just to make sure we are using the same terminology, what do you mean by “model” (statistical model e.g. set of densities?) and “induction”?
By model I do mean a statistical model. I’m not being terribly precise with the term “induction” but I mean something like “drawing conclusions from observation or data.”
Ok. If a Bayesian picks among a set of models, then it is true that (s)he assumes the disjunctive model is true (that is, that the data came from either H0 or H1 or H2 or …), but I suppose any procedure for “drawing conclusions from data” must assume something like that.
I don’t think there is a substantial difference between how Bayesians and frequentists deal with induction, so in that sense I am biting the bullet you mention. The real difference is that frequentists make universally quantified statements, and Bayesians make statements about functions of the posterior.
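To make that contrast concrete, here is a minimal sketch (the coin-flip data, the point null, and the uniform alternative are all my own illustrative assumptions, not anything from the thread). The Bayesian output is a posterior probability of H0, which is only meaningful conditional on the disjunction “H0 or H1” containing the truth; the frequentist output is a p-value, a statement about the long-run behaviour of the procedure under H0.

```python
from math import comb

# Illustrative data: k heads in n flips of a possibly biased coin.
n, k = 100, 60

# H0: p = 0.5 (point null).  H1: p ~ Uniform(0, 1).
# Marginal likelihood of the data under H0:
m0 = comb(n, k) * 0.5**n
# Marginal likelihood under H1: integrate C(n,k) p^k (1-p)^(n-k) over p in [0,1],
# which equals C(n,k) * B(k+1, n-k+1) = 1 / (n + 1).
m1 = 1.0 / (n + 1)

# Bayesian statement (equal prior odds): posterior probability of H0,
# valid only if the disjunctive model "H0 or H1" is (approximately) true.
post_h0 = m0 / (m0 + m1)

# Frequentist statement: two-sided exact binomial p-value under H0 alone,
# a universally quantified claim about repeated sampling, not about H0's truth.
p_ge = sum(comb(n, i) * 0.5**n for i in range(k, n + 1))
p_value = min(1.0, 2 * p_ge)

print(f"P(H0 | data) = {post_h0:.3f}, two-sided p-value = {p_value:.3f}")
```

With these (assumed) numbers the two summaries can even point in different directions, which is one way of seeing that they are answers to different questions rather than competing answers to the same one.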