Hidden variables aren’t random; they are fixed, but unknown. Maybe we are using different definitions of randomness here. Yet I can’t see why you are comfortable with a hidden deterministic algorithm setting hidden variables; wouldn’t such an algorithm itself be random by your definition?
There is no point in arguing which of several hypotheses producing the same results is “really true”. We should just pick the simplest one according to Occam’s razor. But the simplest hypothesis isn’t merely the one that involves fewer objects (like hidden variables); rather, it’s the one our theories fit with minimal stretch. If you agree with the interpretation of probabilities as a measure of uncertainty, then it’s simpler to use the interpretation that fits into this framework, the one with hidden variables, than to postulate fundamentally random processes.
I just don’t see any distinction between a hidden variable and a random variable. The fact that it’s fixed doesn’t change anything. It’s like the difference between having a random number generator inside your program and having a deterministic program that is called with a bunch of randomly generated arguments.
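To make that analogy concrete, here is a minimal Python sketch (the function names and the use of a seed as the “hidden variable” are purely illustrative):

```python
import random

# Variant A: the randomness lives inside the program.
def measure_internal():
    return random.random()

# Variant B: the program itself is fully deterministic; the "hidden
# variable" (here, a seed) is supplied from outside as an argument.
def measure_external(hidden_seed):
    return random.Random(hidden_seed).random()

# Given the same hidden variable, variant B always returns the same value.
assert measure_external(42) == measure_external(42)
```

An observer who only sees the outputs cannot tell the two variants apart; whether the randomness is “inside” or passed in as a fixed-but-unknown argument is invisible from the results.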
Either way you still have to ask where the numbers are coming from and whether they are truly random: whether they are the result of some simple deterministic algorithm, whether we could, at least in principle, predict them with total accuracy, or whether it’s impossible to predict them no matter how much computational power we have.
And I do think there is a practical consequence here. As you mention, Occam’s razor favors simpler hypotheses. If your hypothesis has a huge number of variables that can take arbitrary values, it has far more complexity than a hypothesis that allows for a random number generator.
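One rough way to see that complexity difference is in description length; the sketch below is only illustrative (the counts and the string encoding are arbitrary choices, not a real complexity measure):

```python
import json
import random

# Hypothesis A: spell out N arbitrary hidden values explicitly.
# The description grows linearly with N.
N = 1000
hidden_values = [random.random() for _ in range(N)]
description_a = json.dumps(hidden_values)

# Hypothesis B: a short generative rule plus a random number generator.
# The description stays constant no matter how large N gets.
description_b = "for i in range(N): output random.random()"

# The explicit list of arbitrary values is vastly longer to write down.
assert len(description_a) > len(description_b)
```

The hypothesis that lists every hidden value has to pay for each one, while the RNG hypothesis pays a small fixed cost for the rule itself.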