I’ve realized I’m somewhat skeptical of the simulation argument.
The simulation argument, proposed by Bostrom, holds roughly that either almost exactly all Earth-like worlds don’t reach a posthuman level, almost exactly all such civilizations don’t go on to build many simulations, or we’re almost certainly in a simulation.
Now, if we knew that the only two sorts of creatures that experience what we experience are either in simulations or the actual, original, non-simulated Earth, then I can see why the argument would be reasonable. However, I don’t know how we could know this.
For example, consider zoos: Perhaps advanced aliens create “zoos” featuring humans in an Earth-like world, for their own entertainment or other purposes. These wouldn’t necessarily be simulations of any actual other planet, but might merely have been inspired by actual planets. Similarly, lions in a zoo are similar to lions in the wild, and their enclosure features plants and other environmental features similar to what they would experience in the wild. But I wouldn’t call lions in zoos simulations of wild lions, even if the developed areas from which humans view them were completely invisible to the lions and their enclosure was arbitrarily large.
Similarly, consider games: Perhaps aliens create games or something like them set in Earth-like worlds that aren’t actually intended to be simulations of any particular world. Similarly, human fantasy RPGs often have a medieval theme, so maybe aliens would create games set in a modern-Earth-like world, without having in mind any actual planet to simulate.
Now, you could argue that in an infinite universe, these things are all actually simulations, because there must be some actual, non-simulated world that’s just like the “zoo” or game. However, by that reasoning, you could argue that a rock you pick up is nothing but a “rock simulation” because you know there is at least one other rock in the universe with the exact same configuration and environment as the rock you’re holding. That doesn’t seem right to me.
Similarly, you could say, then, that I’m actually in a simulation right now: even if I’m on the original Earth, there is some other Chantiel in the universe in a situation identical to my current one, who is logically constrained to do the same thing I do, so I am a simulation of her, and my environment is a simulation of hers.
Now, if we knew that the only two sorts of creatures that experience what we experience are either in simulations or the actual, original, non-simulated Earth, then I can see why the argument would be reasonable. However, I don’t know how we could know this.
For example, consider zoos: Perhaps advanced aliens create “zoos” featuring humans in an Earth-like world, for their own entertainment or other purposes.
This falls under either #1 or #2, since you don’t say what human capabilities are in the zoo or explain how exactly this zoo situation matters to running simulations: do we go extinct at some time long in the future when our zookeepers stop keeping us alive (“go extinct before reaching a ‘posthuman’ stage”), having never become powerful zookeeper-level civs ourselves, or are we not permitted to (“extremely unlikely to run a significant number of simulations”)?
Similarly, consider games: Perhaps aliens create games or something like them set in Earth-like worlds that aren’t actually intended to be simulations of any particular world.
This is just fork #3: “we are in a simulation”. At no point does fork #3 require it to be an exact, perfect-fidelity simulation of an actual past, and he is explicit that the minds in the simulation may be only tenuously related to ‘real’/historical minds; if aliens would be likely to create Earth-like worlds, for any reason, that’s fine, because that’s what’s necessary: we observe an Earth-like world (see the indifference principle section).
he is explicit that the minds in the simulation may be only tenuously related to ‘real’/historical minds;
Oh, I guess I missed this. Do you know where Bostrom said the “simulations” can be only tenuously related to real minds? I was rereading the paper but didn’t see mention of this. I’m just surprised, because normally I wouldn’t consider zoo-like things to be simulations.
This falls under either #1 or #2, since you don’t say what human capabilities are in the zoo or explain how exactly this zoo situation matters to running simulations: do we go extinct at some time long in the future when our zookeepers stop keeping us alive (“go extinct before reaching a ‘posthuman’ stage”), having never become powerful zookeeper-level civs ourselves, or are we not permitted to (“extremely unlikely to run a significant number of simulations”)?
In case I didn’t make it clear, I’m saying that even if a significant proportion of civilizations reach a post-human stage and a significant proportion of these run simulations, there would still potentially be a non-small chance of not being in a simulation and instead being in a game or zoo. For example, suppose each post-human civilization makes 100 proper simulations and 100 zoos. Then even if parts 1 and 2 of the simulation argument are true, you still have a 50% chance of ending up in a zoo.
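A quick sketch of that arithmetic (the function and the particular counts are illustrative, not from Bostrom's paper):

```python
def p_zoo(n_simulations: int, n_zoos: int) -> float:
    """Chance that a random non-base-reality observer is in a zoo
    rather than a proper simulation, assuming every post-human
    civilization runs these counts of each."""
    return n_zoos / (n_simulations + n_zoos)

# 100 proper simulations and 100 zoos per civilization:
print(p_zoo(100, 100))  # 0.5
```

So granting forks 1 and 2, the simulation/zoo split is just the ratio of the two kinds of worlds being run.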
“If the real Chantiel is so correlated with you that they will do what you will do, then you should believe you’re real so that the real Chantiel will believe they are real, too. This holds even if you aren’t real.”
By “real”, do you mean non-simulated? Are you saying that even if 99% of Chantiels in the universe are in simulations, then I should still believe I’m not in one? I don’t know how I could convince myself of being “real” if 99% of Chantiels aren’t.
Do you perhaps mean I should act as if I were non-simulated, rather than literally being non-simulated?
It doesn’t matter how many fake versions of you hold the wrong conclusion about their own ontological status, since those fake beliefs exist in fake versions of you. The moral harm caused by a single real Chantiel thinking they’re not real is infinitely greater than infinitely many non-real Chantiels thinking they are real.
Interesting. When you say “fake” versions of myself, do you mean simulations? If so, I’m having a hard time seeing how that could be true. Specifically, what’s wrong with me thinking I might not be “real”? I mean, if I thought I was in a simulation, I think I’d do pretty much the same things I would do if I thought I wasn’t in a simulation. So I’m not sure what the moral harm is.
Do you have any links to previous discussions about this?
I am also skeptical of the simulation argument, but for different reasons.
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
This either means that the Margolus–Levitin theorem is false in our universe (which would be interesting), we’re a ‘leaf’ simulation where the Margolus–Levitin theorem holds, but there’s many universes where it does not (which would also be interesting), or we have a non-zero chance of not being in a simulation.
This is essentially a justification for ‘almost exactly all such civilizations don’t go on to build many simulations’.
[1] A fundamental limit on computation: ≤6×10³³ operations/second/Joule.
[2] Note: I’m using ‘amount of computation’ as shorthand for ‘operations / second / Joule’. This is a little bit different than normal, but meh.
[3] Call C the scaling factor: the amount of simulated computation obtained per unit of computation spent simulating. So e.g. C=0.5 means that to simulate 1 unit of computation you need 2 units of computation. If C≥1, then you can violate the Margolus–Levitin theorem simply by recursively sub-simulating deeply enough. If C<1, then a universe that can do X computation can host at most CX+C²X+…=CX/(1−C) total simulated computation regardless of how deep the tree is, in which case there’s at least a 1−C chance that we’re in the ‘real’ universe.
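The scaling-factor accounting can be sketched numerically. This is only an illustration of the commenter's model (the function name and the particular values of X and C are mine, not from the thread or from Bostrom's paper):

```python
def total_simulated(X: float, C: float, depth: int) -> float:
    """Total computation across all nested simulation levels:
    C*X at depth 1, C^2*X at depth 2, and so on. For C < 1 the
    series converges to C*X/(1-C); for C >= 1 it grows without
    bound, which is the claimed Margolus-Levitin violation."""
    return sum(X * C**k for k in range(1, depth + 1))

X, C = 1.0, 0.5
sim = total_simulated(X, C, depth=60)  # close to C*X/(1-C) = 1.0
real_fraction = X / (X + sim)          # close to 1 - C = 0.5
print(sim, real_fraction)
```

For C = 0.5, the base universe accounts for half of all computation in the tree, matching the "at least a 1−C chance of being real" claim.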
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
No, it doesn’t, any more than “Gödel’s theorem” or “Turing’s proof” proves simulations are impossible, or “problems are NP-hard and so AGI is impossible”.
If C≥1, then you can violate the Margolus–Levitin theorem simply by recursively sub-simulating deeply enough. If C<1, then a universe that can do X computation can host at most CX/(1−C) total simulated computation regardless of how deep the tree is, in which case there’s at least a 1−C chance that we’re in the ‘real’ universe.
There are countless ways to evade this impossibility argument, several of which are already discussed in Bostrom’s paper (I think you should reread the paper), e.g. simulators can simply approximate, simulate smaller sections, tamper with observers inside the simulation, slow down the simulation, cache results like HashLife, and so on. (How do we simulate anything already...?)
All your Margolus-Levitin handwaving can do is disprove a strawman simulation along the lines of a maximally dumb pessimal 1:1 exact simulation of everything with identical numbers of observers at every level.
No, it doesn’t, any more than “Gödel’s theorem” or “Turing’s proof” proves simulations are impossible, or “problems are NP-hard and so AGI is impossible”.
I don’t follow your logic here, which probably means I’m missing something. I agree that your latter cases are invalid logic. I don’t see why that’s relevant.
simulators can simply approximate
This does not evade this argument. If nested simulations successively approximate, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
simulate smaller sections
This does not evade this argument. If nested simulations successively simulate smaller sections, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
tamper with observers inside the simulation
This does not evade this argument. If nested simulations successively tamper with observers, this does not affect total computation—total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
slow down the simulation
This does not evade this argument. If nested simulations successively slow down, total computation[1] decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
cache results like HashLife
This does not evade this argument. Using HashLife, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
(How do we simulate anything already...?)
By accepting a multiplicative slowdown per level of simulation in the infinite limit[2], and not infinitely nesting.
[1] See note 2 in the parent: “Note: I’m using ‘amount of computation’ as shorthand for ‘operations / second / Joule’. This is a little bit different than normal, but meh.”
[2] You absolutely can, in certain cases, get no slowdown or even a speedup by doing a finite number of levels of simulation. However, this does not work in the limit.
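To illustrate the footnote's point, here is a sketch in which the first few levels really do get a speedup but the asymptotic per-level factor is below 1, so the total computation across all depths still converges. (All factors are made-up numbers for illustration.)

```python
def level_computation(factors):
    """Computation available at each nesting depth: the running
    product of the per-level scaling factors."""
    out, running = [], 1.0
    for f in factors:
        running *= f
        out.append(running)
    return out

# Hypothetical: a genuine speedup at the first two levels (factors > 1),
# then a sub-unity factor forever after.
levels = level_computation([1.5, 1.2] + [0.5] * 50)
print(levels[:4])   # early levels can exceed the parent's budget...
print(sum(levels))  # ...but the grand total still converges
```

A finite prefix of speedups shifts the total by a constant factor; it cannot make the infinite sum diverge.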
This does not evade this argument. If nested simulations successively approximate, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it evades the argument by showing that what you take as a refutation of simulations is entirely compatible with simulations. Many impossibility proofs prove an X where people want it to prove a Y, and the X merely superficially resembles a Y.
This does not evade this argument. If nested simulations successively simulate smaller sections, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it evades the argument by showing that what you take as a refutation of simulations is entirely compatible with simulations. Many impossibility proofs prove an X where people want it to prove a Y, and the X merely superficially resembles a Y.
This does not evade this argument. If nested simulations successively tamper with observers, this does not affect total computation—total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
This does not evade this argument. If nested simulations successively slow down, total computation[1] decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
This does not evade this argument. Using HashLife, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
Reminder: you claimed:
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
The simulation argument does not require violating the M-L theorem to the extent it is superficially relevant and resembles an impossibility proof of simulations.
Are you saying that we can’t be in a simulation because our descendants might go on to build a large number of simulations themselves, requiring too many resources in the base reality? But I don’t think that weakens the argument very much, because we aren’t currently in a position to run a large number of simulations. Whoever is simulating us can just turn off/reset the simulation before that happens.
Said argument applies if we cannot recursively self-simulate, regardless of reason (Margolus–Levitin theorem, parent turning the simulation off or resetting it before we could, etc).
In order for ‘almost all’ computation to be simulated, most simulations have to be recursively self-simulating. So either we can recursively self-simulate (which would be interesting), we’re rare (which would also be interesting), or we have a non-zero chance we’re in the ‘real’ universe.
The argument is not that generic computations are likely simulated, it’s about our specific situation—being a newly intelligent species arising in an empty universe. So simulationists would take the ‘rare’ branch of your trilemma.
If you’re stating that generic intelligence was not likely simulated, but generic intelligence in our situation was likely simulated...
Doesn’t that fall afoul of the mediocrity principle applied to generic intelligence overall?
(As an aside, this does somewhat conflate ‘intelligence’ and ‘computation’; I am assuming that intelligence requires at least some non-zero amount of computation. It’s good to make this assumption explicit I suppose.)
Doesn’t that fall afoul of the mediocrity principle applied to generic intelligence overall?
Sure. I just think we have enough evidence to overrule the principle, in the form of sensory experiences apparently belonging to a member of a newly-arisen intelligent species. Overruling mediocrity principles with evidence is common.