For the purposes of this discussion it’s probably easier to get rid of the “alien” bit and just talk about humans. For instance, consider this thought experiment (discussed here by Bostrom):
A firm plan was formed to rear humans in two batches: the first batch to be of three humans of one sex, the second of five thousand of the other sex. The plan called for rearing the first batch in one century. Many centuries later, the five thousand humans of the other sex would be reared. Imagine that you learn you’re one of the humans in question. You don’t know which centuries the plan specified, but you are aware of being female. You very reasonably conclude that the large batch was to be female, almost certainly. If adopted by every human in the experiment, the policy of betting that the large batch was of the same sex as oneself would yield only three failures and five thousand successes. . . . [Y]ou mustn’t say: ‘My genes are female, so I have to observe myself to be female, no matter whether the female batch was to be small or large. Hence I can have no special reason for believing it was to be large.’ (Ibid. pp. 222–3)
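The arithmetic behind the quoted betting policy is easy to check. Here's a toy sketch (my own, not from Bostrom's text), assuming even prior odds over which sex got the large batch:

```python
def tally_bets(small=3, large=5000):
    """Count the outcomes of the 'bet the large batch is my sex' policy."""
    # Every member of the large batch guesses correctly; every member
    # of the small batch guesses wrong (the sex labels are symmetric).
    return large, small  # (successes, failures)

successes, failures = tally_bets()
print(successes, failures)  # 5000 3

# A female observer's posterior odds that the large batch was female,
# starting from even prior odds over which sex got the large batch:
p_female_if_large_female = 5000 / 5003  # chance a random observer is female
p_female_if_large_male = 3 / 5003
odds = p_female_if_large_female / p_female_if_large_male
print(round(odds, 1))  # 1666.7, i.e. roughly 5000-to-3
```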
I’m curious about whether you agree or disagree with the reasoning here (and more generally with the rest of Bostrom’s reasoning in the chapter I linked).
To respond to your points more specifically: I don’t think your attempted analogy is correct; here’s the replacement I’d use:
Consider the group of all possible alien civilisations (respectively: all living humans).
Everyone in the reference class is either in a grabby universe or not (respectively: is either going to die this year or not).
Those who are in grabby universes are more likely to be early (respectively: those who will survive this year are more likely to be young).
When you observe how early you are, you should think you’re more likely to be in a grabby universe (respectively: when you observe how young you are, you should update that you’re more likely to survive).
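The four steps above amount to a single application of Bayes' rule; the probabilities below are made-up placeholders, chosen only so the direction of the update is visible:

```python
def posterior(prior, p_evidence_if_h, p_evidence_if_not_h):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    joint_h = prior * p_evidence_if_h
    joint_not_h = (1 - prior) * p_evidence_if_not_h
    return joint_h / (joint_h + joint_not_h)

# "I'm early" favours "grabby universe", because early observers are
# over-represented among observers in grabby universes:
p_grabby_given_early = posterior(0.5, 0.3, 0.1)

# Identical arithmetic for the human version: "I'm young" favours
# "I'll survive this year", because the young are over-represented
# among those who survive:
p_survive_given_young = posterior(0.5, 0.3, 0.1)

print(round(p_grabby_given_early, 2))  # 0.75, up from the 0.5 prior
```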
Thanks!!

First and foremost, I haven’t thought about it very much :)
I admit, the Bostrom book arguments you cite do seem intuitively compelling. Also, if Bostrom and lots of other reasonable people think SSA is sound, I guess I’m somewhat reluctant to disagree.
(However, I thought the “grabby aliens” argument was NOT based on SSA, and in fact is counter to SSA, because they’re not weighing the alien civilizations by their total populations?)
On the other hand, I find the following argument equally compelling:
Alice walks up to me and says, “Y’know, I was just reading, it turns out that local officials in China have weird incentives related to population reporting, and they’ve been cooking the books for years. It turns out that the real population of China is more like 1.9B than 1.4B!” I would have various good reasons to believe Alice here, and various other good reasons to disbelieve Alice. But “The fact that I am not Chinese” does not seem like a valid reason to disbelieve Alice!!
Maybe here’s a compromise position: Strong evidence is common. I am in possession of probably millions of bits of information pertaining to x-risks and the future of humanity, and then the Doomsday Argument provides, like, 10 additional bits of information beyond that. It’s not that the argument is wrong, it’s just that it’s an infinitesimally weak piece of evidence compared to everything else. And ditto with the grabby aliens argument versus “everything humanity knows pertaining to astrobiology”. Maybe these anthropic-argument thought experiments are getting a lot of mileage out of the fact that there’s no other information whatsoever to go on, and so we need to cling for dear life to any thread of evidence we can find, and maybe that’s just not the usual situation for thinking about things, given that we do in fact know the laws of physics and so on. (I don’t know if that argument holds up to scrutiny, it’s just something that occurred to me just now.) :-)
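For what it's worth, the "weak evidence" reading can be made quantitative. If you treat yourself as a random sample from the world population (a contestable self-sampling assumption, with a rough 8B world-population figure of my own), "I am not Chinese" carries only about a tenth of a bit of evidence against Alice's claim:

```python
import math

# Rough illustrative figures, not real demographics:
WORLD_POP = 8.0e9

def bits_of_evidence(p_e_given_h, p_e_given_alt):
    """Strength of evidence E for H over the alternative, in bits."""
    return math.log2(p_e_given_h / p_e_given_alt)

# Treating myself as a random sample from the world population,
# "I am not Chinese" is slightly likelier under the official 1.4B
# figure than under Alice's 1.9B figure:
p_not_cn_if_1_4b = 1 - 1.4e9 / WORLD_POP   # 0.825
p_not_cn_if_1_9b = 1 - 1.9e9 / WORLD_POP   # 0.7625

bits = bits_of_evidence(p_not_cn_if_1_4b, p_not_cn_if_1_9b)
print(round(bits, 2))  # 0.11: about a tenth of a bit against Alice
```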
Maybe here’s a compromise position: Strong evidence is common. I am in possession of probably millions of bits of information pertaining to x-risks and the future of humanity, and then the Doomsday Argument provides, like, 10 additional bits of information beyond that. It’s not that the argument is wrong, it’s just that it’s an infinitesimally weak piece of evidence compared to everything else.
Thanks for making this point and connecting it to that post. I’ve been thinking that something like this might be the right way to think about a lot of this anthropics stuff — yes, we should use anthropic reasoning to inform our priors, but also we shouldn’t be afraid to update on all the detailed data we do have. (And some examples of anthropics-informed reasoning seem not to do enough of that updating.)

FWIW this has also been my suspicion for a while.
On the other hand, I find the following argument equally compelling
The argument you discuss is an example of very weak anthropic evidence, so I don’t think it’s a good intuition pump for the validity of anthropic reasoning in general. In general anthropic evidence can be quite strong—the presumptuous philosopher thought experiment, for instance, argues for an update of a trillion to one.
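For concreteness, the presumptuous-philosopher update reduces to a one-line odds calculation (my own sketch of the standard setup, with an illustrative observer count):

```python
# Two theories fit the physical evidence equally well, but T2 predicts
# a trillion times as many observers; SIA weights each theory by the
# number of observers it contains. (Observer counts are illustrative.)
prior_odds_t2_over_t1 = 1.0     # physics alone leaves it 50/50
observer_ratio = 1e12           # T2 has a trillion times more observers

posterior_odds = prior_odds_t2_over_t1 * observer_ratio
print(posterior_odds)  # a trillion-to-one update towards T2
```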
However, I thought the “grabby aliens” argument was NOT based on SSA, and in fact is counter to SSA, because they’re not weighing the alien civilizations by their total populations?
I think there’s a terminological confusion here. People sometimes talk about SSA vs SIA, but in Bostrom’s terminology the two options for anthropic reasoning are SSA + SIA, or SSA + not-SIA. So in Bostrom’s terminology, every time you’re doing anthropic reasoning, you’re accepting SSA; and the main reason I linked his chapter was just to provide intuitions about why anthropic reasoning is valuable, not as an argument against SIA. (In fact, the example I quoted above has the same outcome regardless of whether you accept or reject SIA, because the population size is fixed.)
I don’t know whether Hanson is using SIA or not; the previous person who’s done similar work tried both possibilities. But either would be fine, because anthropic reasoning has basically been solved by UDT, in a way which dissolves the question of whether or not to accept SIA—as explained by Stuart Armstrong here.