Any “fundamentally” random process can be seen as a deterministic one. Since it will have a single outcome, we can declare that outcome the only one possible, and obtain a fully deterministic process that is indistinguishable from the original random process. In other words, we can say that a fundamentally random process is a deterministic process relying on hidden variables that are unreachable to us.
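The argument above can be sketched as a toy program. This is a minimal illustration, not anything from the thread: we record one run of a “random” process, then replay it as a deterministic process reading a fixed (but notionally hidden) table. An observer who only sees outputs cannot tell the two apart.

```python
import random

def random_process():
    # A "fundamentally random" coin flip.
    return random.choice([0, 1])

# Record one run of outcomes, then treat them as fixed hidden variables.
hidden_variables = [random_process() for _ in range(10)]

def deterministic_process(step):
    # Fully deterministic: the same step always yields the same outcome,
    # read from the fixed hidden table.
    return hidden_variables[step]

# A replay of the deterministic process reproduces the original random run
# exactly, so the two are observationally indistinguishable.
replay = [deterministic_process(i) for i in range(10)]
assert replay == hidden_variables
```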
Yes but what determines the state of the hidden variables?
We can’t say. They are hidden; all our hypotheses about them would be unfalsifiable. Moreover, the fundamentally random and hidden variables viewpoints are indistinguishable by experiment, so choosing one is a matter of convenience, not absolute truth.
I’m not asking whether the hypothesis is testable, which is a different matter. Obviously it’s impossible to distinguish pseudo-randomness from randomness if it’s done properly. But what you are suggesting is that even if it is random, it can still be thought of as a deterministic process with seemingly random but fixed hidden variables.
I’m asking how that is different from true randomness. A hidden variable in a causal graph that itself has no cause is, for all intents and purposes, “random”. In fact, that’s probably how I would formally define randomness if I had to.
If some simple deterministic algorithm is setting all these hidden variables, that’s a different hypothesis. But if they have no cause, and you have all these variables that can take totally arbitrary values for no reason, then that’s randomness.
I don’t really think it matters, which is why I don’t care whether it’s a testable hypothesis. But some people, like the OP, believe it’s really important, which is how this issue came up.
Hidden variables aren’t random; they are fixed, but unknown. Maybe we are using different definitions of randomness here. Yet I can’t see why you are comfortable with a hidden deterministic algorithm setting hidden variables; wouldn’t such an algorithm itself be random by your definition?
There is no point in arguing which of two hypotheses producing the same results is “really true”. We should just pick the simplest one, per Occam’s razor. But the simplest hypothesis isn’t merely the one that involves fewer objects (like hidden variables); rather, it’s the one our theories fit with minimal stretch. If you agree with the interpretation of probabilities as a measure of uncertainty, then it’s simpler to use the interpretation that fits into this framework: the one with hidden variables.
I just don’t see any distinction between a hidden variable and a random variable. That it’s fixed has nothing to do with anything. It’s the difference between having a random number generator inside your program and having a deterministic program that is called with a bunch of randomly generated arguments.
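The program analogy can be made concrete. Below is a hypothetical sketch (names are my own) of the same random walk written both ways: once with the RNG inside the program, and once as a deterministic function whose “random” choices are supplied from outside as arguments, playing the role of the hidden variables.

```python
import random

def walk_internal(steps):
    # Variant A: the randomness lives inside the program.
    pos = 0
    for _ in range(steps):
        pos += random.choice([-1, 1])
    return pos

def walk_external(choices):
    # Variant B: the same walk as a deterministic function of its arguments.
    pos = 0
    for c in choices:
        pos += c
    return pos

# Generate the "hidden variables" once, outside the deterministic program.
choices = [random.choice([-1, 1]) for _ in range(100)]

# Variant B is deterministic: replaying the same arguments gives the same result.
assert walk_external(choices) == walk_external(choices)
```

Either way, the question of where `choices` came from, and whether that source is truly random, has only been moved, not answered.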
Either way, you still have to ask where the numbers are coming from and whether they are truly random: whether they are the result of some simple deterministic algorithm that we could, at least in principle, predict with total accuracy, or whether they are impossible to predict no matter how much computational power we have.
And I do think there is a practical consequence. As you mention, Occam’s razor favors simpler hypotheses. If your hypothesis has a huge number of variables that can take arbitrary values, it has far more complexity than a hypothesis that allows for a random number generator.
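One way to make this complexity comparison concrete is description length, in the spirit of Kolmogorov complexity / minimum description length. This is my own illustrative sketch, not something from the thread: a hypothesis that posits n arbitrary, uncaused values must spell out every one of them, while a hypothesis that posits a short seeded generator has a description that stays small no matter how large n gets.

```python
import json
import random

n = 1000
rng = random.Random(42)
observations = [rng.random() for _ in range(n)]

# Hypothesis A: n hidden variables with arbitrary, uncaused values.
# Its description must list every value, so it grows with n.
hypothesis_a_length = len(json.dumps(observations))

# Hypothesis B: "a short seeded generator produced them."
# Its description is a fixed-size program plus a seed, independent of n.
hypothesis_b_length = len("random.Random(42)")

assert hypothesis_b_length < hypothesis_a_length
```

Of course, this only measures the size of the hypothesis, not whether it is true; a truly random sequence has no short generator at all.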
Would you agree then that probability doesn’t exist because it is just the product of us not reaching those hidden variables, but if we could reach them then everything would be certain?
If so, it seems that probability, like free will and time, is also an illusion.
Probability is in the mind.
Or maybe not
Quantum uncertainty and indeterminism? I’ve never heard these terms, but this weekend at Yosemite I met a guy from Sweden who had come here to get his PhD in physics, and he made some comment along the lines of the movement of waterfalls not being predictable/explainable by physics… so is a waterfall an example of quantum uncertainty or indeterminism? If not, what are some examples?
The typical examples are things like radioactive decay, although there are many others.
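Radioactive decay is easy to sketch as a toy Monte Carlo simulation (my own illustration, with hypothetical names): each atom decays in a given time step with a fixed probability, independently of its history. No individual decay is predictable, yet the population as a whole falls off in the familiar roughly exponential way.

```python
import random

def decay_step(n_atoms, p_decay, rng):
    # Each surviving atom independently decays with probability p_decay.
    return sum(1 for _ in range(n_atoms) if rng.random() >= p_decay)

rng = random.Random(0)
n = 100_000
history = [n]
for _ in range(10):
    n = decay_step(n, 0.1, rng)
    history.append(n)

# The population shrinks by roughly 10% per step, even though which
# particular atoms decay in any step is unpredictable.
print(history)
```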
And, may I repeat, it is a myth that some barrier prevents quantum indeterminism from having macroscopic consequences. If it did, particle physics could not be an experimental science.
Note that the fundamentally random processes viewpoint and the hidden variables viewpoint are equivalent (they produce the same predictions), so choosing one is a matter of convenience.
And the hidden variables viewpoint is convenient precisely because it allows us to think that probability is in the mind, that is, that probabilities are nothing but a measure of uncertainty. It eliminates the only special case (fundamentally random processes), allowing us to apply our uncertainty-measure concept everywhere. Fundamentally random processes are simply processes that rely on parameters about which we fundamentally can’t reduce our uncertainty, and that’s it.
So yes, I would agree.
Thanks for the complete answer; I like your thinking process!
I agree that they are equivalent in that both denote a lack of understanding of the underlying mechanics. But in the case of randomness, even though it could be an illusion, I still subjectively (a naive view) favor the existence of randomness (and probability) in the basic physical mechanics, because I fail to see a connection between certainty and our brain’s apparently unconstrained decision making.
Nevertheless, I am open to the possibility that physics is purely deterministic and that such a process may recreate our consciousness (I have to think more about that, though).
As others already mentioned, introducing fundamental randomness doesn’t help resolve the free will problem: whether or not physical processes are truly random, you have no control over them.
You may want to read the LW free will sequence.
Opinions vary. Naturalistic libertarianism is a thing.
Which is why discussions of fundamental indeterminism in QM always involve hidden variables. Proponents of fundamental indeterminism are invoking Occam’s razor.
OTOH, you can never have certain evidence that a given law is deterministic, only that it holds in 99% or 99.9% of cases.