Extraordinary claims require extraordinary evidence. I don’t think the burden of proof is misplaced here. It is reasonable to expect anyone who makes a positive claim to support it. Dismissing this principle whenever it works against our own view is a double standard.
To be honest, my impression is that we rationalists were very happy with this principle when Dawkins used it against the God hypothesis in The God Delusion, but now some of us are less comfortable with it when it is turned against the simulation hypothesis (despite the near-perfect isomorphism between the two, as Chalmers himself shows).
Why? Because techno-theology has an appealing technological vibe and rests on anthropic arguments, the kind of arguments that are also discussed in cosmology. The difference is that anthropic reasoning in cosmology makes some predictions, such as constraining the cosmological constant/dark energy.
I acknowledge that Bostrom’s and Chalmers’s anthropic arguments make sense, and the hypothesis is intriguing. But we mustn’t adopt a view just because it is intellectually appealing; that is a bias. We should adopt it only if it is true, and the impossibility of checking whether it is true is a deep flaw. The burden of proof applies. The simulation hypothesis is a seductive cosmic teapot, but it is still a cosmic teapot.
That said, I agree that we cannot know for sure. It is always a Bayesian weighting, and my point was by no means to reject the simulation hypothesis with absolute confidence; sorry if I gave that impression. I rank it as more probable than traditional theology and less probable than non-simulated reality.
Concerning Occam’s razor, of course parsimony applies to the description, not to the sequence/output; I didn’t mean it otherwise. My argument concerned the simulation process: it doesn’t seem parsimonious to add one, several, or even an infinity of computational layers on top of the process generating our world.
This looks like common sense, but I must admit that once we get into the theoretical details (which I do not fully master) it is less straightforward. The devil is in the details: one can cheat by designing an ad hoc UTM to arrive at a weird result, and the equivalence theorems between UTMs hold only on average and up to a constant. However, we can restore the common-sense view that there is no logical free lunch by assessing overall computational resources, not only pure K-description but also logical depth, speed prior, or Levin’s complexity. All else being equal, one description should be preferred over another if it is more parsimonious both in information and in computation. It’s true that Occam’s razor is not always interpreted this way, but in my opinion it should be.
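For concreteness, Levin’s Kt is one way (among the measures just listed) to formalize “parsimonious in information and in computation” at the same time; a sketch, not the only possible choice:

$$Kt(x) = \min_{p} \{\, |p| + \log_2 t(p) \;:\; U(p) = x \,\}$$

where U is a fixed universal machine, |p| is the program length in bits, and t(p) is its running time, so both long descriptions and slow ones are penalized.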
Also, Occam’s razor makes sense only as long as the assessed theory has predictive power: a theory that predicts everything in fact predicts nothing. In AIT terms, that would be a description that doesn’t just produce the sequence of our universe but the sequences of many possible universes. String theory faces this problem, and so does the simulation hypothesis, because we don’t know in which universe we end up.
I also think the matter has little to do with black holes, which are a prediction of GR formally derived from the very beginning (the Schwarzschild solution) and are now well observed, even if discussion continues about the physics at the horizon and inside.
I fully agree that “nothing weird must happen” is a biased presumption, but I doubt that a perfect simulation of our observable universe is a straightforward prediction of the evolution of economics and technology. I expect VR far better than today’s, but there are computational costs and physical limits.
First, statistically you need about 1 m³ of universe to perfectly simulate 1 m³ of another, similar universe, all else being equal. By “perfectly” I mean 1 bit : 1 bit; no free lunch. If you try to cheat by running the simulation slower, you will eventually run out of time to complete it because of cosmic expansion.
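To give a rough sense of the bit budget involved, here is a back-of-the-envelope sketch using the Bekenstein bound; this is my own illustration, not something the argument above depends on:

```python
# Order-of-magnitude sketch: Bekenstein bound on the information content of
# ~1 m^3 of water-density matter. Illustrative only; the exact number is not
# the point, only that the budget is finite and enormous.
import math

hbar = 1.054571817e-34  # reduced Planck constant (J*s)
c = 2.99792458e8        # speed of light (m/s)

volume = 1.0                                     # m^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3) # sphere of that volume, ~0.62 m
mass = 1000.0 * volume                           # kg, water density
energy = mass * c**2                             # rest-mass energy (J)

# Bekenstein bound expressed in bits: I <= 2*pi*R*E / (hbar * c * ln 2)
max_bits = 2 * math.pi * radius * energy / (hbar * c * math.log(2))
print(f"Bekenstein bound for 1 m^3 of water: ~{max_bits:.1e} bits")
```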
Second, something that is often overlooked: GR forbids you, in practice, from building an arbitrarily large computer. More is different. Your computer is subject to c, so a larger computer is a slower computer. Your computer is also made of matter, so the larger it is, the less dense it must be to avoid collapsing into a black hole. And even if its energy density stays below the critical value, the only way to resist gravitation is radiation pressure. Since you cannot maintain thermal equilibrium in an expanding, cooling universe, the only solution is to generate radiation the way stars do (for a limited time). The only stable large-scale organization of matter in the observable universe is precisely what we observe: stars, galaxies, and gravitational filaments. In the end, a cosmic-scale computer would be made of ordinary stars and galaxies, and I doubt that ordinary stars and galaxies compute simulations of other universes; it seems to me that they compute their own dynamics in our universe.
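To illustrate the two scaling limits (signal latency set by c, and the density limit set by the Schwarzschild radius), here is a toy calculation under deliberately crude assumptions, a uniform spherical computer; a sketch, not a serious model:

```python
# Toy illustration of the size/speed/density trade-offs for a spherical computer.
# Crude assumptions: uniform density, non-rotating, Schwarzschild radius as the
# collapse criterion. Not a serious model of a cosmic-scale machine.
import math

c = 2.99792458e8   # speed of light (m/s)
G = 6.67430e-11    # gravitational constant (m^3 kg^-1 s^-2)

def light_crossing_time(radius_m):
    """Minimum round-trip signal time across the machine: bigger means slower."""
    return 2 * radius_m / c

def max_mass_before_collapse(radius_m):
    """Mass at which a sphere of this radius equals its own Schwarzschild radius
    (R = 2GM/c^2); beyond this it collapses into a black hole."""
    return radius_m * c**2 / (2 * G)

for r in (1.0, 1e3, 1e9, 9.46e15):  # 1 m, 1 km, ~3 light-seconds, ~1 light-year
    latency = light_crossing_time(r)
    max_rho = max_mass_before_collapse(r) / (4 / 3 * math.pi * r**3)
    print(f"R = {r:.2e} m: latency >= {latency:.2e} s, "
          f"max mean density ~ {max_rho:.2e} kg/m^3")
```

The maximum average density before collapse falls as 1/R², which is the “larger must be less dense” point above.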
Third, you can circumvent these fundamental problems by dropping the requirement of a lossless, bit-perfect simulation, as in current VR. However, a lossy simulation, even one far better than current VR, would diverge in its details from a non-simulated universe or from a lossless simulation. The good news is that this becomes an empirically falsifiable theory. The bad news is that it looks like an already falsified one, because scientific experiments so far support the view that we live in a world where information is conserved; our physical laws do not appear to be lossy (leaving aside the problem of black hole horizons). An imperfect simulation also opens the door to an agentic God/simulator, which goes even more against Occam’s razor, and observations do not support that either (unless we believe in supernatural events).
Finally, if all that is not enough, there is still the paradox I mentioned in my last comment: infinite regress. Dawkins put this paradox forward in his rebuttal of theism, and the same argument applies to the simulation hypothesis. The simulator has as many reasons to be simulated as we do; it’s circular reasoning.
I respect the idea, but I don’t buy it and assign it a low probability.
Extraordinary claims require extraordinary evidence
You’re not gonna like this, but that’s another one that’s not actually true. Extraordinary theories have often been established with mundane evidence, evidence that had been sitting in front of our faces for decades or centuries; the new theory, the extraordinary claim, only became apparent once that mundane evidence was subjected to new kinds of analysis, new arguments, arguably complicated arguments. Although new evidence was usually gathered to test the theory, it wasn’t strictly needed. If it had been impossible to go out into the world and subject the theory to new tests (as it is for the simulation hypothesis), the truth of the theory still would have become obvious. Examples of such theories include plate tectonics, heliocentrism, and evolution.
To be honest, my impression is that we rationalists were very happy with this principle when Dawkins used it against the God hypothesis in The God Delusion
I was very happy with it back then, because I was just a kid. I hadn’t learned how scientific thinking (actual, not performative) really ought to work. I trusted the accounts of those who were busy doing science, not realising that using a particular frame doesn’t always equip a person to question the frame or to develop better frames when that one starts to reach its limits.
not only pure K-description but also logical depth, speed prior, or Levin’s complexity
Does this universe really look to you like it conforms to a speed prior? This universe doesn’t care at all about runtime. (It can indeed only be simulated very lossily.) (One of the only objections I still take seriously is that subjects within a lossy simulation of a universe, optimised for answering certain questions that don’t closely concern the minutiae of their thoughts, might have far lower experiential measure than actual physical people; so perhaps, although they are far more numerous than natural people, it may still work out to be unlikely to be one of them.)
I could say more about the rest of that, but it doesn’t really matter whether we believe the simulation hypothesis today.