[ Note: I strongly agree with some parts of jbash’s answer, and strongly disagree with other parts. ]
As I understand it, Bostrom’s original argument, the one that got traction for being an actually-clever and thought-provoking discursive fork, goes as follows:
Future humans, specifically, will do at least one of: [ die off early, run lots of high-fidelity simulations of our universe’s history [“ancestor-simulations”], or decide not to run such simulations ].
If future humans run lots of high-fidelity ancestor-simulations, then most people who subjectively experience themselves as humans living early in a veridical human history will in fact be living in non-base-reality simulations of such realities, run by posthumans.
If one grants that our descendants are likely to a] survive, and b] not elect against running vast numbers of ancestor-simulations [ both of which assumptions felt fairly reasonable back in the ’00s, before AI doom and the breakdown of societal coordination became such nearly felt prospects ], then we are forced to conclude that we are more likely than not living in one such ancestor-simulation, run by future humans.
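For concreteness, here is a minimal sketch of the observer-counting step behind that conclusion [ the symbols are my own shorthand in the spirit of Bostrom’s fraction-of-observers bookkeeping, not anything taken from this thread ]. Write $f_P$ for the fraction of human-level civilizations that both survive to posthumanity and choose to run ancestor-simulations, $\bar{N}$ for the average number of such simulations each of those civilizations runs, and $H$ for the number of observers in one run of human history, real or simulated. The fraction of observers-with-experiences-like-ours who are simulated is then

$$ f_{\mathrm{sim}} \;=\; \frac{f_P \, \bar{N} \, H}{f_P \, \bar{N} \, H + H} \;=\; \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}, $$

and even modest guesses, say $f_P = 0.1$ and $\bar{N} = 1000$, give $f_{\mathrm{sim}} = 100/101 \approx 0.99$. That arithmetic is all the force the argument has: granting a] and b] above makes simulated observers swamp the one base-reality history.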
It’s a valid and neat argument which breaks reality down into a few mutually-exclusive possibilities—all of which feel narratively strange—and forces you to pick your poison.
Since then, Bostrom and others have overextended, confused, and twisted this argument, in unwise attempts to turn it into some kind of all-encompassing anthropic theory. [ I Tweeted about this over the summer. ]
The valid, original version of the Simulation Hypothesis argument relies on the [plausible-seeming!] assumption that posthumans, in particular, will share our human interest in our species’ history, in particular, and our penchant for mad science. As soon as your domain of discourse extends outside the class boundaries of “future humans”, the Simulation Argument no longer says anything in particular about your anthropic situation. We have no corresponding idea what alien simulators would want, or why they would be interested in us.
Also, despite what Aynonymousprsn123 [and Bostrom!] have implied, the Simulation Hypothesis argument was never actually rooted in any assumptions about local physics. Changing our assumptions about such factors as [e.g.] the spatial infinity of our local universe, quantum decoherence, or a physical Landauer limit, doesn’t have any implications for it. [ Unless you want to argue for a physical Landauer limit so restrictive it’d be infeasible for posthumans to run any ancestor-simulations at all. ]
So, while the Simulation Hypothesis argument can imply you’re being simulated by posthumans [ if and only if you strongly believe posthumans will do both of: a] not die off early, b] not elect against running lots of ancestor-simulations ], it can’t prove you’re being simulated in general. It’s just not that powerful.
Future humans, specifically, will do at least one of: [ die off early, run lots of high-fidelity simulations of our universe’s history [“ancestor-simulations”], or decide not to run such simulations ].
Is there any specific reason the first option is “die off early” and not “be unable to run lots of high-fidelity simulations”? The latter encompasses the former, as well as scenarios where future humans survive but for one reason or another can’t run these simulations.
I think a more general argument would look like this:
“Future humans will do at least one of: 1) be unable to run high-fidelity simulations, 2) be unwilling to run high-fidelity simulations, or 3) run high-fidelity simulations.”
Yes, I think that’s a validly equivalent and more general classification, although I’d reflect that “survive but lack the power or will to run lots of ancestor-simulations” didn’t seem like a plausible-enough future to promote it to consideration, back in the ’00s.