I’m curious about how Less Wrong readers would answer these questions:
What is your probability estimate for some form of the simulation hypothesis being true?
If you received evidence that changed your estimate to be much higher (or lower), what would you do differently in your life?
To answer both, there’s no consequence. So I choose not to invent a completely arbitrary prior.
I do enjoy fantasizing about possible measurable consequences of particular types of simulations. Perhaps if I’m interesting enough, I’ll be copied into other simulations; perhaps we can discover some artifact of variably approximate simulation when no important observer is near, etc.
A simulation hypothesis such as “our universe is a simulation” is not falsifiable even given perfect knowledge of the universe at some point in time; maybe the universe has a definite beginning and end and it’s simulated perfectly the whole way through. Therefore, I’ll use the following definition of the simulation hypothesis: “The best description of the universe as we are capable of observing it describes our observations as happening entirely within a simulation crafted by optimizing processes.”
Let’s assume for the sake of convenience that “the” priors for the laws of physics are P, and let’s call the distribution of universes that optimizing processes would simulate P’. The only necessary difference between P and P’ is that P’ is biased toward universes that are easy and/or useful to simulate. How easy a universe is to simulate from outside can probably be estimated by how easy it is to simulate from within. We have quantum mechanics, but quantum computers have been late in coming, which suggests that our universe would be difficult to simulate. As for utility: evolution optimizes for things that themselves optimize for reproduction, but it also produces optimization for pretty much random things. We can ignore the random things and ask how useful our universe is for reproduction. I’m guessing that the universe, since it seems to involve lots of pointless computation, is not good for that.
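The shape of this argument can be put in toy Bayesian form. Everything below is my own illustration, not the commenter’s: suppose a fraction f_real of un-simulated universes and f_sim of simulated universes would look like ours, and p is the prior probability of being in a simulation.

```python
# Toy Bayesian version of the P vs. P' argument.
# All numbers are invented for illustration.
def posterior_simulated(p: float, f_sim: float, f_real: float) -> float:
    """P(simulated | our observations), given:
    p      -- prior probability of being simulated
    f_sim  -- fraction of simulated universes that look like ours
    f_real -- fraction of un-simulated universes that look like ours
    """
    return p * f_sim / (p * f_sim + (1 - p) * f_real)

# If simulations are biased toward easy/useful universes and ours looks
# hard and useless, then f_sim < f_real, pulling the posterior below
# the prior:
print(posterior_simulated(0.5, 0.1, 0.4))  # ~0.2
```

The point of the sketch is only that the direction of the update depends on the ratio f_sim/f_real, which is exactly what the ease-of-simulation and usefulness considerations above are trying to estimate.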
So, given the above, I’d estimate the probability as being… oh, how does 20% sound?
Now, of course, the other thing to look for in a simulated universe is simulation artifacts: things that seem to not follow the laws of physics, and behaviors that are only approximations to how things should behave. Suffice to say, we haven’t seen any of those.
Quantum computers are computers which use quantum superposition for parallel processing, and are not required for simulating quantum mechanics. And our “classical” computers do in fact take advantage of quantum mechanics, as classical physics does not allow for solid state transistors.
I don’t understand what point you’re trying to make here, but classical physics allows for mechanical computers.
It seems that quantum computers are required for simulating quantum mechanics in sub-exponential time, though.
When discussing asymptotic algorithmic complexity, you should specify the varying parameter of problem complexity.
The usual default parameter is number of bits it takes to write down the problem. It could also be number of particles. Either one works in this case.
What quantum algorithm for simulating quantum mechanics takes sub-exponential time with respect to the number of particles?
I didn’t have a particular algorithm in mind when I said that, but since you ask I went and found this one.
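For a sense of why classical simulation of quantum mechanics is believed to cost exponential resources in the number of particles: the state vector of n qubits contains 2^n complex amplitudes, so merely storing it (never mind evolving it) blows up. A quick back-of-the-envelope sketch (my illustration, not part of the linked algorithm):

```python
# Memory needed to store the full state vector of an n-qubit system:
# 2**n amplitudes at 16 bytes each (a double-precision complex number).
def state_vector_bytes(n_qubits: int) -> int:
    return 16 * 2 ** n_qubits

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 10 qubits fit in ~16 KB; 30 need ~17 GB; 50 need ~18 petabytes.
```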
I consider any evidence that a truly random/spontaneous process occurs to be evidence that the universe isn’t closed, because something is happening without an internal mechanism to arbitrate it. And here we have the 2008 Nobel Prize in Physics, “for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics”.
I do not think that phrase means what you think it means.
I thought it meant what you linked to, and after checking I’m pretty sure that was what the prize was about.
So what do you think about the possibility of a physical mechanism being able to make a free choice?
Perhaps some better examples:
spontaneous creation of particles in a vacuum
spontaneous particle decay
QM is deterministic. Spontaneous symmetry breaking also occurs in stat mech, which applies to deterministic classical systems.
QM could be interpreted in a deterministic way, but this is not a common view. I would like to learn more about it from you and others here on LW.
“Spontaneous” means that something happens without precursor; without any apparent cause. It is orthogonal in meaning to “determined”.
When you write that spontaneous symmetry breaking is deterministic, perhaps you mean that its description is analytic—wholly described by a set of deterministic mathematical equations?
Spontaneous symmetry breaking is part of stat mech. It has practically nothing to do with QM. Stat mech can be interpreted probabilistically, but it is not at all controversial to apply it to deterministic systems.
Maybe that’s a reasonable definition, but you contrasted “spontaneous” with “closed,” which is not orthogonal to “determined.”
My point was that true randomness of any kind would be evidence that a system is not closed. This might be a novel observation (I haven’t heard it before) but I think it is a logical one. It is relevant to reductionism (we wouldn’t want supernatural processes swooping down to make choices for our free processes) and whether we are in a simulation.
When applied to deterministic systems, the spontaneous symmetry breaking isn’t really spontaneous, just apparently so. The idea is that the direction of breaking is determined by the initial conditions, but we may not have enough information about the initial conditions to predict it.
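That point can be made concrete with a deterministic toy model (my sketch, not the stat-mech formalism): overdamped motion in the symmetric double-well potential V(x) = -x²/2 + x⁴/4. The dynamics dx/dt = x - x³ is symmetric under x → -x, yet the system always settles into one of the two wells, and the choice is fixed entirely by an arbitrarily small asymmetry in the initial condition:

```python
# Deterministic "spontaneous" symmetry breaking in a double well.
# dx/dt = x - x**3 is symmetric under x -> -x, but it amplifies any
# tiny initial asymmetry until the system lands in one well, x = +1
# or x = -1.
def settle(x0: float, dt: float = 0.01, steps: int = 10_000) -> float:
    x = x0
    for _ in range(steps):
        x += dt * (x - x ** 3)  # forward-Euler step
    return x

print(settle(+1e-9))  # settles near +1.0
print(settle(-1e-9))  # settles near -1.0
```

Nothing here is random; an observer who can’t resolve the 10⁻⁹ in the initial condition would simply call the outcome “spontaneous”.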
It sounds like you want to argue with whoever is responsible for “spontaneous symmetry breaking in subatomic physics”. I didn’t mention QM apart from that.
All of your examples count as random events with a collapse postulate, but not with many-worlds, and hidden-variable theories have been formulated both ways.
Based on your past comments, I assume you already know that. Still, since your examples don’t suffice to distinguish interpretations of QM, they also don’t suffice to distinguish a universe with randomness from one without. Or are you just pointing out that we should assign higher probability to randomness than we would have if we hadn’t observed anything that looked like collapse?
I’m not much interested in creating bogus/useless “probability estimates”. The simulation hypothesis I rate, as I do religion, as “false, barring further evidence”. Evidence that the simulation hypothesis is true could be a “physically impossible” inconsistency, like in Heinlein’s story “They”. If I became convinced that this was a simulation, I’d become a complete hedonist; why bother with anything else when you are completely under the thumb of whatever’s running the simulation?
You are completely under the thumb of the physical laws.
Sorry, but that is a reification of “physical laws”; physical laws aren’t a thing, they are simply our description of “how things work”.
Stuff with which you interact is part of the rules of the game applied to you. The more generally applicable of these rules you call “physical laws”. Those are the rules that can’t be helped. If you are in the domain of a singleton, then its preference is one of the inescapable laws.
You can analyze the raw content of the laws applied to you, just as you can analyze sensory input, and see patterns such as individual agents making decisions that affect your condition. Maybe such patterns are there, maybe they are not, but the judgment of what to do under the given rules must depend on what exactly those patterns are, not just a fact of “their existence”.
That’s interesting: Vladimir_Nesov chastised me for exactly the same thing a month ago. While I probably do reify physical laws, I’m sure he was just applying a metaphor.
My point in both cases is more that the concept of “existence” is very low on meaningfulness: you shouldn’t act on mere “existence” or “nonexistence” of something; you must instead understand what that something is.