If you taboo “anthropics” and replace it with “observation selection effects”, then there are all sorts of practical consequences. See the start of Nick Bostrom’s book for some examples.
The other big reason for caring is the “Doomsday argument” and the fact that all attempts to refute it have so far failed. Almost everyone who’s heard of the argument thinks there’s something trivially wrong with it, but all the obvious objections can be dealt with; see, for instance, the later chapters of Bostrom’s book. Further, alternative approaches to anthropics (such as the “self-indication assumption”), and attempts to bypass anthropics completely (such as “full non-indexical conditioning”), have been developed to avoid the Doomsday conclusion. But very surprisingly, they end up reproducing it. See Katja Grace’s thesis.
Jaan Tallinn’s attempt: Why Now? A Quest in Metaphysics. The “Doomsday argument” is far from certain.
Given the (observed) information that you are a 21st century human, the argument predicts that there will be a limited number of those. Well, that hardly seems news—our descendants will evolve into something different soon enough. That’s not much of a “Doomsday”.
I described some problems with Tallinn’s attempt here—under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.
Also, any analysis which predicts we are in a simulation runs into its own version of doomsday: unless there are strictly infinite computational resources, our own simulation is very likely to come to an end before we get to run simulations ourselves. (Think of simulations and sims-within-sims as like a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since they greatly outnumber the interior nodes.)
We seem pretty damn close to me! A decade or so is not very long.
In a binary tree (for example), the internal nodes and the leaves are roughly equal in number.
Remember that in Tallinn’s analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically they want to explore lots of alternate histories, and these grow exponentially). I suppose Tallinn’s model could be adjusted so that they only explore “branch-points” in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.
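To put a number on the “last year / last second” point, here is a toy calculation (the growth factor per branch interval and the number of intervals below are arbitrary illustrative choices, not figures from Tallinn’s presentation):

```python
# If each branching interval contains g times as many simulations as the
# one before it, what fraction of all simulated observers fall in the
# final interval before the singularity?
def final_interval_fraction(g, n_intervals):
    """Fraction of simulations belonging to the last of n_intervals,
    when each interval holds g times as many simulations as the previous one."""
    counts = [g ** k for k in range(n_intervals)]
    return counts[-1] / sum(counts)

# Doubling per interval: about half of all sims sit in the last interval.
print(final_interval_fraction(2, 50))    # ~ 0.5
# Faster growth: almost everyone is in the last interval.
print(final_interval_fraction(1000, 5))  # ~ 0.999
```

Whatever the interval length, the fraction in the final interval approaches 1 − 1/g, so a random simulated observer should expect to be in the last branch interval, however short it is.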
On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn’s and Bostrom’s analysis, m is very much bigger than 2.
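The 1/m figure is easy to check on a complete m-ary tree (the depth and branching factors below are arbitrary illustrative choices):

```python
# Fractions of internal nodes vs leaves in a complete m-ary tree.
def node_fractions(m, depth):
    """Return (internal_fraction, leaf_fraction) for a complete m-ary
    tree with the given branching factor and depth."""
    leaves = m ** depth
    internal = (m ** depth - 1) // (m - 1)  # geometric series 1 + m + ... + m^(depth-1)
    total = leaves + internal
    return internal / total, leaves / total

# Binary tree: internal nodes and leaves are indeed roughly equal.
print(node_fractions(2, 20))    # ~ (0.5, 0.5)
# Branching factor 1000: being internal has probability about 1/m.
print(node_fractions(1000, 3))  # ~ (0.001, 0.999)
```

So the binary-tree intuition is correct for m = 2, but once each civilization runs many sims, almost every node is a leaf.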
More likely that there are a range of historical “tipping points” that they might want to explore—perhaps including the invention of language and the origin of humans.
Surely the chance of being in a simulated world depends somewhat on its size. Likewise, the chance of a sim running simulations of its own depends on its size: a large world might have a high chance of running simulations, while a small world might have a low chance. Averaging over worlds of such very different sizes seems pretty useless, though any average number of simulations run per world would probably be low, since so many sims would be leaf nodes and so would run no simulations themselves. Leaves might be more numerous, but they will also be smaller, and less likely to contain many observers.
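As a toy illustration of why size-weighting matters: every number below is made up, and in particular the assumed observer-count ratio between a world and its sims is purely hypothetical.

```python
# Observer-weighted chance of being in a leaf, for a two-level toy model:
# one parent ("internal") world running m leaf simulations, where each
# leaf contains only a fraction `shrink` as many observers as the parent.
def weighted_leaf_probability(m, shrink):
    parent_observers = 1.0
    leaf_observers = m * shrink  # m leaves, each of relative size `shrink`
    return leaf_observers / (parent_observers + leaf_observers)

# Equal-sized worlds: being in a leaf is near-certain for large m.
print(weighted_leaf_probability(1000, 1.0))   # ~ 0.999
# If each sim holds a millionth as many observers, the conclusion flips.
print(weighted_leaf_probability(1000, 1e-6))  # ~ 0.001
```

On this sketch, whether “almost all observers are in leaves” depends entirely on how fast worlds shrink relative to how fast they multiply.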
What substrate are they running these simulations on?
I had another look at Tallinn’s presentation, and it seems he is rather vague on this… it’s difficult to know what computing designs super-intelligences would come up with! Presumably, though, they would use quantum computers to maximize the number of simulations they could create, which is how they could get branch-points every simulated second (or even more rapidly). Bostrom’s original simulation argument provides some lower bounds (and references) on what could be done using just classical computation.
Well. The claims that it’s relevant to our current information state have been refuted pretty well.
Citation needed (please link to a refutation).
I’m not aware of any really good treatments. I can link to myself claiming that I’m right, though. :D
I think there may be a selection effect—once the doomsday argument seems not very exciting, you’re less likely to talk about it.
The doomsday argument is itself anthropic thinking of the most useless sort.
Citation needed (please link to a refutation).
I don’t need a refutation. The doomsday argument doesn’t affect anything I can or will do. I simply don’t care about it. It’s like a claim that I will probably be eaten at any point in the next 100 years by a random giant tiger.