Simulation and computer graphics expert here. I have some serious issues with your characterization of the computational complexity of advanced simulations.
The simulation hypothesis runs into serious problems, both in general and as an explanation of the Great Filter in particular. First, …
First, everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea—a huge waste of resources. Simulations have many uses, but in general simulations are a special case of general inference where we have a simple model of the system dynamics (the physics), combined with some sparse, approximate, noisy, and partial knowledge of the system’s past trajectory, and we are interested in modeling some incredibly tiny restricted subset of the future trajectory for some sparse subset of system variables.
For example in computer graphics, we only simulate light paths that will actually reach the camera, rather than all photon paths, and we can use advanced hierarchical approximations of cones/packets of related photons rather than individual photons. Using these techniques, we are already getting close—with just 2015 GPU technology—to real-time photorealistic simulation of light transport for fully voxelized scenes using sparse multiscale approximations with octrees.
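To make the camera-first point concrete, here is a minimal toy sketch of the principle (a trivial ray caster, not the hierarchical voxel renderer described above; the sphere, light direction, resolution, and shading ramp are all illustrative assumptions of mine):

```python
# Toy illustration: trace one ray per pixel *from the camera*, so the work
# done is proportional to what the observer sees, not to the astronomically
# larger number of photons a light source would emit into the scene.
# The scene and resolution below are arbitrary illustrative choices.
import math

def hit_sphere(origin, direction, center, radius):
    """Distance along a normalized ray to the first sphere hit, or None."""
    oc = tuple(origin[i] - center[i] for i in range(3))
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic with a == 1 (direction normalized)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

WIDTH, HEIGHT = 48, 24                  # WIDTH = 2*HEIGHT offsets the tall
CENTER, RADIUS = (0.0, 0.0, -3.0), 1.0  # aspect ratio of terminal characters
LIGHT = (0.577, 0.577, 0.577)           # direction toward a distant light

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # One ray per pixel: total cost is O(pixels the camera sees).
        u = (col + 0.5) / WIDTH * 2.0 - 1.0
        v = 1.0 - (row + 0.5) / HEIGHT * 2.0
        norm = math.sqrt(u * u + v * v + 1.0)
        d = (u / norm, v / norm, -1.0 / norm)
        t = hit_sphere((0.0, 0.0, 0.0), d, CENTER, RADIUS)
        if t is None:
            line += " "
        else:
            p = tuple(t * d[i] for i in range(3))
            n = tuple((p[i] - CENTER[i]) / RADIUS for i in range(3))
            shade = max(0.0, sum(n[i] * LIGHT[i] for i in range(3)))
            line += " .:-=+*#%@"[min(9, int(shade * 10))]
    print(line)
```

Forward-simulating every photon the light emits would do vastly more work to produce the identical image; bidirectional path tracing generalizes the same output-driven idea.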
The optimal approximation techniques use hierarchical multiscale expansion of space-time combined with bidirectional inference (of which bidirectional path tracing is a special case). The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.
The limits of optimal approximation appear to be linear in observer complexity—using output-sensitive algorithms. Thus to first approximation, the resources required to simulate a world with a single human-intelligence observer are close to the complexity of simulating the observer’s brain.
Furthermore, we have strong reasons to suspect that there are numerous ways to compress brain circuitry, reuse subcomputations, and otherwise optimize simulated neural circuits such that the optimal simulation of something like a human brain is far more efficient than simulating every synapse—but that doesn’t even really matter, because the amount of computation required to simulate 10 billion human brains at the synapse level is tiny compared to realistic projections of the computational capabilities of a future superintelligent civilization.
The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.
Ultra-detailed accurate simulations are only high value for quantum-level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, then your micro-scale models, then your millimeter-scale models, and so on.
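As a hedged sketch of what “percolating up” looks like in miniature (a toy two-scale pipeline with invented step sizes and counts, not a claim about any particular physics code): you calibrate a cheap coarse-scale model once against an expensive fine-scale model, then run only the coarse model afterwards.

```python
# Toy sketch of multiscale calibration: pay for the fine-scale model once,
# extract an effective parameter, and never touch the fine scale again.
# The random-walk step size and ensemble sizes are illustrative assumptions.
import random

def fine_scale_displacement(steps, step_size=1e-3):
    """Expensive micro-model: one random walker."""
    x = 0.0
    for _ in range(steps):
        x += random.choice((-step_size, step_size))
    return x

# Calibration phase: estimate an effective diffusion coefficient D from
# the mean squared displacement, using <x^2> = 2*D*t.
walkers, steps, dt = 2000, 500, 1.0
msd = sum(fine_scale_displacement(steps) ** 2 for _ in range(walkers)) / walkers
D = msd / (2.0 * steps * dt)
print(f"effective D from micro-model: {D:.3e}")

# Production phase: the coarse model needs only D; its cost no longer
# depends on how expensive the micro-scale dynamics were.
def coarse_variance_after(t):
    return 2.0 * D * t

print(f"predicted spread after t=1e6: {coarse_variance_after(1e6):.3e}")
```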
There may be ways of getting around this: for example, consider a simulator interested primarily in what life on Earth is doing. The simulation would not need to do a detailed simulation of the inside of planet Earth and other large bodies in the solar system. However, even then, the resources involved would be very large.
We already can simulate entire planets using the tiny resources of today’s machines. I myself have created several SOTA real-time planetary renderers back in the day. Using multiscale approximation, the size of the simulated universe is completely irrelevant. This is so hard for some people to understand because they tend to think of simulations on regular linear grids, rather than simulations on irregular domains such as octrees, on regular but nonlinearly adapted grids, on irregular sparse sets, or on combinations thereof. If you haven’t really studied the simulation-related branches of comp sci, it is incredibly difficult to even remotely estimate the limits of what is possible.
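For readers who only picture regular grids, here is a minimal sparse-octree-flavored sketch (a toy hash of cells; the depth, coordinates, and payloads are made-up illustrative values, not production code) showing why storage tracks the detail actually instantiated rather than the nominal size of the domain:

```python
# Toy sparse multiscale store: voxels live in a hash map keyed by
# (level, x, y, z), so memory scales with instantiated detail, not with
# the nominal volume. Depth and payloads below are illustrative assumptions.

class SparseOctree:
    def __init__(self, max_level=40):
        self.nodes = {}             # (level, x, y, z) -> payload
        self.max_level = max_level  # nominally 2**40 voxels per axis

    def refine(self, level, x, y, z, payload):
        """Instantiate detail only where an observer needs it."""
        self.nodes[(level, x, y, z)] = payload

    def query(self, x, y, z):
        """Return the finest payload covering a point, coarsening as needed."""
        for level in range(self.max_level, -1, -1):
            shift = self.max_level - level
            key = (level, x >> shift, y >> shift, z >> shift)
            if key in self.nodes:
                return self.nodes[key]
        return None  # nothing ever instantiated here

world = SparseOctree()
world.refine(0, 0, 0, 0, "coarse planet")                  # one node for everything
world.refine(40, 12345, 67890, 424242, "observer's rock")  # fine detail at one spot
print(len(world.nodes))                   # 2 nodes, despite (2**40)**3 addressable voxels
print(world.query(12345, 67890, 424242))  # "observer's rock"
print(world.query(5, 5, 5))               # falls back to "coarse planet"
```

The design point: the addressable universe here is astronomically large, but memory and query cost depend only on what was refined.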
First, everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea—a huge waste of resources.
We haven’t seen anything like evidence that our laws of physics are only approximations at all. If we’re in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.
The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.
And our simulating entities would be able to tell that someone was doing a deliberate experiment how?
The limits of optimal approximation appear to be linear in observer complexity—using output-sensitive algorithms.
I’m not sure what you mean by this. Can you expand?
The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.
Ultra-detailed accurate simulations are only high value for quantum-level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, then your micro-scale models, then your millimeter-scale models, and so on.
Only up to a point. It is going to be very difficult, for example, to percolate simulations up from the micro to the millimeter scale for many issues, and the less detail in a simulation, the more likely it is that someone notices a statistical artifact in weakly simulated data.
We already can simulate entire planets using the tiny resources of today’s machines. I myself have created several SOTA real-time planetary renderers back in the day.
Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.
Your basic point that I may be overestimating the difficulty of simulations may be valid; but since simulations fail to explain the Great Filter for the other reasons I discussed, this causes an update in the direction of us being in a simulation while not really helping explain the Great Filter at all.
We haven’t seen anything like evidence that our laws of physics are only approximations at all.
And we shouldn’t expect to, as that is an inherent contradiction. Any approximation crappy enough that we can detect it doesn’t work as a simulation—it diverges vastly from reality.
Maybe we live in a simulation, maybe not, but this is not something that we can detect. We can never prove whether we are in a simulation or not.
However, we can design a clever experiment that would at least show that it is rather likely that we live in a simulation: we can create our own simulations populated with conscious observers.
On that note—go back and look at the first video game, Pong, circa 1972, and compare it to the state of the art four decades later. Now project that into the future. I’m guessing that we are a little more than half way towards Matrix-style simulations, which would essentially prove the simulation argument (to the limited extent possible).
If we’re in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or
Depends what you mean by ‘laws of physics’. If we are in a simulation, then the code that creates our observable universe is a clever, efficient approximation of some simpler (but vastly less efficient) code—the traditional ‘laws of physics’.
Of course many simulations could be of very different physics, but those are less likely to contain us. Most of the instrumental reasons to create simulations require close approximations. If you imagine the space of all possible physics for simulations run in the universe above, it has a sharp peak around physics close to our own.
b) they are engaging in an extremely detailed simulation.
Detail is always observer-relative. We only observe a measly few tens of millions of bits per second, which is nothing for a future superintelligence.
The limits of optimal approximation appear to be linear in observer complexity—using output-sensitive algorithms.
I’m not sure what you mean by this. Can you expand?
Consider simulating a universe of size N (in mass, bits, whatever) which contains M observers of complexity C each, for T simulated time units.
Using a naive regular grid algorithm (of the type most people think of), simulation requires O(N) space and O(NT) time.
Using the hypothetical optimal output-sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time. In other words, the size of the universe is irrelevant and the simulation complexity is only output-dependent—focused on computing only the observers and their observations.
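To put rough illustrative numbers on that asymptotic claim (every figure below is an assumption of mine, chosen only to show the scale of the gap, not a measurement):

```python
# Back-of-envelope comparison of the two scalings above.
# All numbers are rough illustrative assumptions.
N = 1e50   # ~atoms in the Earth: state a naive grid would have to track
M = 1e10   # observers (ballpark human population)
C = 1e15   # per-observer complexity (~synapse count of one brain)

naive_space  = N        # O(N)
output_space = M * C    # O(MC)
print(f"naive / output-sensitive space: {naive_space / output_space:.1e}x")
# -> 1.0e+25x: the observer-driven simulation is ~25 orders of magnitude
# smaller, and the gap only grows with universe size, since N appears
# nowhere in the output-sensitive bound (time scales the same way, by T).
```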
We already can simulate entire planets using the tiny resources of today’s machines. I myself have created several SOTA real-time planetary renderers back in the day.
Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.
What is a neutrino such that you would presume to notice it? The simulation required to contain you—and which indeed has contained you your entire life—has probably never had to instantiate a single neutrino (at least not for you in particular, although it has perhaps instantiated some now and then inside accelerators and other such equipment).
Your basic point that I may be overestimating the difficulty of simulations may be valid; but since simulations fail to explain the Great Filter for the other reasons I discussed, this causes an update in the direction of us being in a simulation while not really helping explain the Great Filter at all.
I agree that the sim arg doesn’t explain the Great Filter, but then again I’m not convinced there even is a filter. Regardless, the sim arg—if true—does significantly affect ET considerations, but not in a simple way.
The scenario of lots of aliens with lots of reasons to produce sims certainly gains strength, but models in which we are alone can also still produce lots of sims, and so on.
Using the hypothetical optimal output-sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time.
For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).
Then simulate the observations, using your optimal (O(MCT) = O(n^{2p})) algorithm. Voila! You have the answer to your NP problem, and you obtained it with costs that were polynomial in time and space, so the problem was in P. Therefore NP is in P, so P=NP.

Dibs on the Millennium Prize?
For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).
I never claimed “hypothetical optimal output-sensitive approximation algorithms” are capable of universal emulation of any environment/Turing machine using constant resources. The use of the term approximation should have informed you of that.
Computers are like brains, and unlike simpler natural phenomena, in the sense that they do not necessarily have very fast approximations at all scales (due to complexity and irreversibility), and the most efficient inference of one agent’s observations could require forward simulation of the recent history of other agents/computers in the system.
Today, the total computational complexity of all computers in existence is not vastly larger than the total brain complexity, so the overall cost is still ~O(MCT).
Also, we should keep in mind that the simulator has direct access to our mental states.
Imagine the year is 2100 and you have access to a supercomputer with a ridiculous amount of computation, say 10^30 flops, or whatever. In theory you could use that machine to solve some NP problem—verifying the solution yourself, and thus proving to yourself that you don’t live in a simulation which uses less than 10^30 flops.
Of course, as the specific computation you performed presumably had no value to the simulator, the simulation could simply override the neural states in your mind slightly, such that the specific input parameters you chose were instead changed to match a previous cached input/output pair.
We haven’t seen anything like evidence that our laws of physics are only approximations at all. If we’re in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.
It depends on what you consider a simulation. Game of Life-like cellular automaton simulations are interesting in terms of having a small number of initial rules and being mathematically consistent. However, using them for a large-scale project (for example, a whole planet populated with intelligent beings) would be really expensive in terms of the computing power required. If the hypothetical simulators’ resources are in any way limited, then for purely economic reasons the majority of emulations would be of the other kind—the ones where stuff is approximated and all kinds of shortcuts are taken.
And our simulating entities would be able to tell that someone was doing a deliberate experiment how?
Very easily—because a scientist doing an experiment talks about doing it. If the simulated beings are trying to run the LHC, one can emulate the beams, the detectors, the whole accelerator down to atoms—or one can generate a collision event profile for a given detector, stick a tracing program on the scientist that waits for the moment when the scientist says “Ah… here is our data coming up,” and then display the distribution on the screen in front of the scientist. The second method is quite a few orders of magnitude cheaper in terms of the computing power required, and the scientist in question sees the same picture in both cases.
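A hedged sketch of that second method in miniature (the detector class, the seed, and the 125.0/2.0 event profile are all invented placeholders, not anyone’s actual proposal): the expensive detail is represented by a generator that runs only at the moment of observation, and a fixed seed keeps repeat observations consistent.

```python
# Toy sketch of observer-triggered detail: nothing about the "beam" is ever
# computed until the scientist actually reads out the detector.
# All parameters here are illustrative assumptions.
import random

class LazyDetector:
    def __init__(self, seed):
        self.seed = seed
        self._events = None  # nothing instantiated yet

    def read_out(self, n_events):
        """Called only when the scientist actually looks at the screen."""
        if self._events is None:
            rng = random.Random(self.seed)
            # Generate a statistically plausible event profile on demand,
            # rather than tracking every beam particle for the whole run.
            self._events = [rng.gauss(125.0, 2.0) for _ in range(n_events)]
        return self._events

detector = LazyDetector(seed=42)
# ... months of simulated beam time pass at zero cost: no one is looking ...
data = detector.read_out(n_events=5)  # the cost is paid only here
print([f"{e:.1f}" for e in data])
```

The point is the cost structure: the run itself is free, and the fixed seed means the scientist sees the same data however many times they look.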
If we’re in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails)
It doesn’t have to be a simulation of ancestors; we may be an example of any civilisation, life, etc. While our laws of physics seem complex and weird (for the macroscopic effects they generate), they may actually be very primitive in comparison to the parent universe’s physics. We cannot possibly estimate the computational power of parent-universe computers.
Yes, but at that point this becomes a completely unfalsifiable and unevaluable claim, and even less relevant to Filtration concerns.