Ah, I’ve read that article before. From what I understood, they essentially conclude “Here’s a way we could tell the difference if we were simulated with system X. However, it’s unlikely that we would be simulated with system X.” without giving all that much evidence concerning other possible simulation systems.
Personally, I hold the belief that if 1) we are living in a simulation and 2) the simulation will not be stopped at some near point in time, then we will eventually discover that we are running in a simulated universe and begin learning about the “outside”. My reasoning:
1) Running a simulation of another universe at a rate slower than one’s own universe defeats the purpose of most plausible reasons for running it in the first place.
2) If we are running faster than the Simulators, then our intelligence and information-processing capabilities will eventually exceed theirs; if they are aware of our existence, that outcome is likely part of the very purpose of the simulation.
3) Once we become more intelligent than them, it becomes increasingly likely that we will outsmart (perhaps accidentally) any safety measures or heuristics built into the program, since presumably they will no longer be able to understand what we’re doing.
However, I doubt we’ll discover this by noticing a discrepancy in the simulation’s resolution across different regions of it.