But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, then for every run that achieves the result they want, many realities actually occur that don't.
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping there’s only one universe, the one where the simulators got what they wanted.
Besides, there’s no evidence that our universe is being guided according to any agent’s utility function, and if it is, it’s certainly not much like ours.
Yes, you’ve made that point before. I don’t disagree with it. I’m not sure why you are bringing it up again.
Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information.
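(A minimal sketch of that divergence, purely as an illustration: the logistic map here is just a stand-in for "some chaotic rule" and is not meant as a claim about physics. The "model" follows exactly the same rule as the "system" but starts from information that is off by one part in a trillion.)

```python
# A "model" that follows the same rule as the "system" it projects, but whose
# starting information differs by one part in a trillion. Because the rule
# (the logistic map at r = 4) is chaotic, the tiny gap compounds each step.

def step(x):
    return 4.0 * x * (1.0 - x)

system = 0.3           # the "real" state
model = 0.3 + 1e-12    # the model's copy of that state

for n in range(1, 61):
    system, model = step(system), step(model)
    if n % 10 == 0:
        print(f"n={n:2d}  system={system:.6f}  model={model:.6f}  "
              f"gap={abs(system - model):.1e}")
```

After roughly forty or fifty steps the gap is of order one and the model no longer tracks the system at all, even though it followed the same rules throughout.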
It must contain the same information. It doesn’t need to contain the same rules.
Projecting the simulation with perfect accuracy is equivalent to running the simulation.
This isn’t true. The doubling map, for example, is chaotic, and yet many points can have their orbits calculated without that work: if the starting point is rational, we can always give an exact value after any number of iterations with less computational effort than simply iterating the function (a short sketch of this appears after this comment). There are some complicating factors to this sort of analysis; in particular, if the universe is essentially discrete, then what we mean when we talk about chaos becomes subtle, and if the universe isn’t discrete, then what we mean when we discuss computational complexity becomes subtle (we need to use Blum-Shub-Smale machines or something similar rather than Turing machines). But the upshot is that chaotic behavior is not equivalent to being computationally complex.
There have been some papers trying to map out connections between the two (and I don’t know that literature at all), and superficially there are some similarities, but if someone could show deep, broad connections of the sort you seem to think are already known, that would be the sort of thing that could lead to a Turing Award or a Fields Medal.
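(A quick sketch of the rational-orbit shortcut mentioned above, for the doubling map x → 2x mod 1. The function names are just illustrative, and working over the rationals sidesteps the discrete-versus-continuous subtleties noted above.)

```python
from fractions import Fraction

def doubling_iterated(x0, n):
    # Brute force: apply x -> 2x mod 1 a total of n times.
    x = x0
    for _ in range(n):
        x = (2 * x) % 1
    return x

def doubling_shortcut(x0, n):
    # For rational x0 = p/q, the n-th iterate is (2**n * p mod q) / q.
    # pow(2, n, q) is fast modular exponentiation, so this takes only
    # O(log n) multiplications instead of n iterations.
    p, q = x0.numerator, x0.denominator
    return Fraction(pow(2, n, q) * p % q, q)

x0 = Fraction(3, 7)
assert doubling_shortcut(x0, 1000) == doubling_iterated(x0, 1000)
print(doubling_shortcut(x0, 10**9))  # exact value after a billion steps, no iteration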
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping there’s only one universe, the one where the simulators got what they wanted.
But at any given time you may be in a branch that’s going to be deleted or rewound because it doesn’t lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don’t want. So not only do we have no reason to suppose this is happening, it wouldn’t be particularly useful to us even if we did suppose that the branch the simulators want is better for us than the ones they don’t.
I concede that my understanding of the requirements to project a simulation of our universe may have been mistaken, but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
But at any given time you may be in a branch that’s going to be deleted or rewound because it doesn’t lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don’t want.
Not necessarily. It would depend on how narrow they wanted things and how often they intervened in this fashion. If such interventions are not very common, then the majority of experience will be in universes which are very close to those desired by the simulators.
but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
Which are the ‘extraneous additions’?
Omniscience and omnipotence have already been discussed at length—the SA does not imply perfection in either category on the part of the creator, but this is a meaningless distinction. For all intents and purposes the creator would have the potential for absolute control over the simulation. It is of course much more of an open question whether the creator would ever intervene in any fashion.
(I discussed that at length elsewhere, but basically I think future posthumans would be less likely to intervene in our history, while aliens would be more likely to.)
Also, my points about the connectedness between the morality and utility functions of creator and creation still stand. The SA requires that the creator made the simulation for a purpose in its own universe, and that the creator’s utility function or morality evolved from something like our descendants.
Not necessarily. It would depend on how narrow they wanted things and how often they intervened in this fashion. If such interventions are not very common, then the majority of experience will be in universes which are very close to those desired by the simulators.
No disagreement there.