Suppose a doomsday scenario (whichever one you prefer) comes to pass, and wipes out 99.999999975% of humanity. The last two living humans cower in a bunker and discuss.
“If we imagine ourselves assigned randomly among all of the humans who ever lived, the odds are extremely low that we would by chance happen to be the last two; therefore we must expect another hundred billion or so humans to come after us, making our place in line unremarkable. Statistically speaking, that can’t be the final apocalypse outside.”
One of them, comforted to learn that humanity has a long bright future still ahead of it, leaves the bunker and is immediately dissolved by the ever-encroaching tide of grey goo. The other is quietly amazed by their odds of being not just among the last two but the actual last. But not for very long.
(I don’t think the above is actually valid; I just mean to illustrate that anthropic reasoning seems very difficult to do correctly.)
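For concreteness, here is the arithmetic the story trades on, as a minimal Python sketch. The population figures are my own assumptions, though commonly cited rough estimates: about 8 billion people alive and about 100 billion humans ever born. Note that the story’s 99.999999975% is exactly what you get when two people out of 8 billion survive:

```python
# Figures assumed for illustration: ~8e9 people alive today and
# ~1e11 humans ever born (both commonly cited rough estimates).
alive_now = 8e9
ever_born = 1e11

# If exactly 2 of the living survive, the fraction wiped out is:
wiped_out = 1 - 2 / alive_now
print(f"{wiped_out:.9%}")        # 99.999999975%, the story's figure

# Under naive self-sampling, the chance of being one of the last 2
# humans who will ever live, out of ~1e11 total:
p_last_two = 2 / ever_born
print(f"{p_last_two:.1e}")       # 2.0e-11

# The bunker reasoning inverts this tiny number: to make our place in
# line "unremarkable", roughly as many humans should come after us as
# came before, i.e. another hundred billion or so.
```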
Major lotteries can have odds of more than 100 million to one against winning the jackpot. Nevertheless, ordinary people who win such a jackpot have no difficulty in discovering that they have won and collecting. It is literally news (i.e. it gets into the newspapers) when a major lottery prize goes unclaimed for as little as a week.
Ordinary people are thus capable of using observations to make correct updates against 100-million-to-one priors. If it happens, it is possible; arguments by Smart People that it is impossible can therefore be ignored. The ordinary person in that situation wins; the Smart Person loses. The ordinary person (Elon Musk, for example) can easily believe that humanity can have a glorious future among the stars, and work to make that possible. He ignores the Very Smart Argument that the more glorious that future, the less likely we are to be around at its start. The Smart Person, unable to believe any such thing, achieves nothing.
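To see why the lottery case is unproblematic, here is a minimal Bayes-update sketch (the error rates are made-up illustrative numbers, not data). An observation overcomes a 100-million-to-one prior only if the chance of producing that observation by mistake is even smaller, which is exactly what rechecking the ticket and official verification accomplish:

```python
def posterior_win(prior: float, false_match_rate: float) -> float:
    """P(actually won | ticket appears to match), by Bayes' rule.

    false_match_rate: probability a losing ticket would nonetheless
    appear to match (misreading, misprint, etc.); an assumed figure.
    """
    p_evidence = prior * 1.0 + (1 - prior) * false_match_rate
    return prior / p_evidence

prior = 1e-8  # ~100-million-to-one jackpot odds

# One casual glance at the numbers: easy to misread.
print(posterior_win(prior, false_match_rate=1e-3))   # ~1e-5: almost surely a mistake

# Careful rechecking plus official verification drives the error
# rate far below the prior, and the posterior approaches 1.
print(posterior_win(prior, false_match_rate=1e-12))  # ~0.9999: you really won
```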
As others have said, this is the classic Doomsday Argument, and it is a misapplication of probability, because the calculation has no predictive power: it yields the same relative doomsday prediction no matter who makes it, whether Adam and Eve, us, or a meta-galactic civilization a billion years from now.
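To make that scale-invariance concrete, here is a minimal sketch of the standard form of the calculation (the "with 95% confidence, the total number of humans N is less than 20 times your birth rank n" version). Every observer gets the identical relative forecast, so the argument cannot distinguish Adam and Eve's epistemic situation from ours:

```python
# Standard Doomsday Argument bound: if my birth rank n is uniform on
# 1..N, then with 95% confidence n > 0.05 * N, i.e. N < 20 * n.
def doomsday_bound(n: float, confidence: float = 0.95) -> float:
    return n / (1 - confidence)

# Birth ranks for three very different observers (rough, illustrative):
observers = {
    "Adam and Eve":             2,
    "us (~100 billion born)":   1e11,
    "meta-galactic far future": 1e15,
}

for name, n in observers.items():
    bound = doomsday_bound(n)
    # The *relative* prediction is identical every time: N < 20 n,
    # i.e. at most 19x more humans after you than before you.
    print(f"{name}: N < {bound:.3g}  ({bound / n:.0f}x birth rank)")
```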
This is the doomsday argument.