Not much. I initially considered this thread “not worth getting into”, as @avturchin’s line of reasoning is based on multiple small confusions, addressing each of which would be a huge chore and is only tangentially relevant to the topic of the post in the first place. I still agree with this assessment today. But I will present the general outline of what is wrong with it, for you and future readers.
First of all, Gott’s version of the DA is different from the version of the DA I’m talking about in this post. It’s a different mathematical model, based on the number of years humanity has existed instead of the number of humans, and it returns a different estimate for extinction: 97.5% confidence of extinction within the next 8 million years, assuming humanity has existed for 200,000 years, regardless of birthrates. Suffice to say, these two versions of the DA produce different predictions, and by shifting some free parameters in the models we can get even more different predictions still. This is completely expected if DA arguments are wrong.
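For concreteness, here is a minimal sketch of the arithmetic behind that figure, assuming Gott’s uniform prior over where we sit in humanity’s total lifespan:

```python
# A minimal sketch of Gott's delta-t argument, assuming a uniform prior
# over our position within humanity's total duration T.
t_past = 200_000  # years humanity has existed so far (the figure used above)

# If t_past / T is uniform on (0, 1), then with probability 0.975 we are
# past the first 2.5% of T:
#   t_past >= 0.025 * T  =>  T <= 40 * t_past  =>  t_future <= 39 * t_past
t_future_upper = 39 * t_past
print(f"97.5% upper bound on remaining time: {t_future_upper:,} years")
# -> 7,800,000 years, i.e. roughly the "8 million years" quoted above
```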
Likewise, Laplace’s sunrise problem is yet another mathematical model, and a certain interpretation of it produces a vaguely similar result to Gott’s version of the DA. Even assuming LS is applicable, this isn’t really an argument in favor of GDA or any kind of anthropic reasoning. Imagine that the correct answer to a test question is 1⁄5002, while your reasoning, which makes an extra assumption, produces the answer 1⁄5000. Clearly, this doesn’t mean that your reasoning is correct, nor does it justify the extra assumption.
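To spell out where such numbers come from: under Laplace’s rule of succession, after n successes and no failures the probability that the next trial fails is 1/(n+2). A minimal sketch, using n = 5000 purely to match the hypothetical figures above:

```python
from fractions import Fraction

# Laplace's rule of succession: after n successes and 0 failures,
# P(next trial fails) = 1 / (n + 2).
n = 5000  # hypothetical number of past observations, matching the example

laplace = Fraction(1, n + 2)  # 1/5002 -- the "correct answer" in the analogy
naive = Fraction(1, n)        # 1/5000 -- close, but reached via an extra assumption
print(laplace, naive)         # -> 1/5002 1/5000
```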
And then there is the whole separate question of the applicability of LS to the situation at hand. LS also doesn’t fully capture our knowledge state, but at least it’s less wrong, in a sense, as it doesn’t make the particular mistake I’m talking about in this post.