Is this evidence for the Simulation hypothesis?

I haven’t come across this particular argument before, so I hope I’m not just rehashing a well-known problem.

“The universe displays some very strong signs that it is a simulation.

As has been mentioned in some other answers, one way to efficiently achieve a high-fidelity simulation is to design it so that you only compute as much detail as is needed. If someone takes a cursory glance at something, you compute only its rough details; only when someone looks at it closely, with a microscope say, do you need to fill in the fine detail.
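To make the demand-driven idea concrete, here is a minimal sketch in Python (all the names here are made up for illustration): detail at each level is computed only when an observer actually asks for it, and memoised so it is never computed twice.

```python
from functools import lru_cache

# Hypothetical demand-driven "terrain": finer levels of detail exist
# only as unevaluated computations until an observer requests them,
# and the cache plays the role of the simulator's memory.
@lru_cache(maxsize=None)
def detail(x, level):
    """Height of the terrain at integer position x, refined to `level`."""
    if level == 0:
        return 0.0  # the cheap, cursory-glance approximation
    # Each finer level adds an ever-smaller correction to the level above.
    correction = (hash((x, level)) % 100) / 100.0 / 2 ** level
    return detail(x, level - 1) + correction

print(detail(3, 1))   # a glance: almost no work
print(detail(3, 30))  # a microscope: only now is deep detail computed
```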

This puts a big constraint on the kind of physics you can have in a simulation. You need this property: suppose some physical system starts in state x. The system evolves over time to a new state y, which is now observed to accuracy ε. As the simulation only needs to display the system to accuracy ε, the implementor doesn’t want to have to compute x to arbitrary precision; they’d like to compute x only to some limited degree of accuracy. In other words, demanding y to some limited degree of accuracy should require knowing x only to a limited degree of accuracy.

Let’s spell this out. Write y as a function of x: y = f(x). We want that for every ε > 0 there is a δ > 0 such that whenever |x′ − x| < δ, we have |f(x′) − f(x)| < ε. This is just a restatement in mathematical notation of what I said in English. But do you recognise it?

It’s the standard textbook definition of a continuous function. We humans invented the notion of continuity because it was a ubiquitous property of functions in the physical world. But it’s precisely the property you need to implement a simulation with a demand-driven level of detail. All of our fundamental physics is based on equations that evolve continuously over time, and so it is optimised for a demand-driven implementation.
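As a quick numerical illustration (a toy sketch, with f(x) = x² chosen arbitrarily): continuity is exactly the guarantee that lets the simulator fix a δ-budget for the input once the observer has fixed an ε-budget for the output.

```python
# Toy ε–δ check for f(x) = x²: fixing an output accuracy ε determines
# an input accuracy δ that the simulator never needs to exceed.
def f(x):
    return x * x

x, eps = 2.0, 1e-6
delta = eps / (2 * abs(x) + 1)  # a δ that provably works for x² near x

# Any input known only to accuracy δ still yields the output to accuracy ε.
for x_approx in (x - delta, x + delta / 3, x + delta):
    assert abs(f(x_approx) - f(x)) < eps
print(f"ε = {eps:.0e} is met by δ = {delta:.1e}")
```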

One way of looking at this: if y = f(x), then to compute n digits of y you need only finitely many digits of x. This has another amazing advantage: if you only ever display things to a given accuracy, you only ever need to compute your real numbers to a finite accuracy. Nature could have chosen any number of arbitrarily complicated functions on the reals. But in fact we only find functions with the special property that they need be computed only to finite precision. This is precisely what a smart programmer would have implemented.
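Here is a sketch of that finite-digits property (using exact rationals; the function and the accuracy are arbitrary examples): we keep requesting one more decimal digit of x until the image interval is narrower than the accuracy the observer demanded.

```python
from fractions import Fraction

def f(x):
    return x * x + x  # continuous, and increasing for x > 0

def eval_to_accuracy(x_digits, eps):
    """Return f(x) to within eps using only finitely many digits of x.

    x_digits(n) is x truncated to n decimal digits, as a Fraction.
    """
    n = 1
    while True:
        lo = x_digits(n)                 # x known to n digits means...
        hi = lo + Fraction(1, 10 ** n)   # ...x lies somewhere in [lo, hi]
        if f(hi) - f(lo) < eps:          # image interval narrow enough?
            return f(lo), n
        n += 1                           # otherwise, demand one more digit

# x = 1/3: no finite decimal writes it exactly, yet finitely many
# digits of it pin down f(1/3) to any accuracy we ask for.
value, digits = eval_to_accuracy(
    lambda n: Fraction(10 ** n // 3, 10 ** n), Fraction(1, 10 ** 8))
print(float(value), "from", digits, "digits of x")
```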

(This also helps motivate the use of real numbers. The basic operations on real numbers such as addition and multiplication are continuous and require only finite precision in their arguments to compute their values to finite precision. So real numbers give a really neat way to allow inhabitants to find ever more detail within a simulation without putting an undue burden on its implementation.)
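A sketch of that claim for addition and multiplication specifically (interval arithmetic over exact rationals; the 4-digit inputs are an arbitrary choice): finitely many digits of each argument pin the result into an interval, and refining the arguments refines the result.

```python
from fractions import Fraction

def add(a_lo, a_hi, b_lo, b_hi):
    # The sum of two intervals is again an interval.
    return a_lo + b_lo, a_hi + b_hi

def mul(a_lo, a_hi, b_lo, b_hi):
    # For multiplication, the extremes are among the corner products.
    corners = [a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi]
    return min(corners), max(corners)

# π and e to just 4 decimal digits each already pin π·e to ~3 digits.
pi = (Fraction(31415, 10**4), Fraction(31416, 10**4))
e  = (Fraction(27182, 10**4), Fraction(27183, 10**4))
lo, hi = mul(*pi, *e)
print(float(lo), float(hi))  # 8.5393... and 8.5398...
```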

But you can go one step further. As Gregory Benford says in Timescape: “nature seemed to like equations stated in covariant differential forms”. Our fundamental physical quantities aren’t just continuous, they’re differentiable. Differentiability means that if y = f(x), then once you zoom in closely enough, y depends (to first order) linearly on x. This means that, roughly, one more digit of y requires just one more digit of x. In other words, our hypothetical programmer has arranged things so that, after some initial finite-length segment, they can know in advance exactly how much data they are going to need.
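A quick numerical check of that “one digit in, one digit out” claim (assuming the derivative is of order 1, as it is for sin near 1): each extra digit of x buys one extra digit of sin(x), whereas for √x at 0, which is continuous but not differentiable there, each output digit costs two input digits.

```python
import math

for n in range(1, 6):
    dx = 10.0 ** -n  # each iteration supplies one more digit of x
    err_sin  = abs(math.sin(1.0 + dx) - math.sin(1.0))  # ~ cos(1)·dx
    err_sqrt = abs(math.sqrt(dx) - math.sqrt(0.0))      # = dx ** 0.5
    print(f"{n} digits of x -> sin error {err_sin:.1e}, sqrt error {err_sqrt:.1e}")
```

The sin error shrinks tenfold per extra input digit (linear, as differentiability promises), while the sqrt error shrinks only by √10, so the programmer’s data budget is predictable only in the differentiable case.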

After all that, I don’t see how we can know we’re not in a simulation. Nature seems cleverly designed to make a demand-driven simulation of it as efficient as possible.”

http://www.quora.com/How-do-we-know-that-were-not-living-in-a-computer-simulation/answer/Dan-Piponi