I wrote a thing that turned out to be too long for a comment: The Doomsday Argument is even Worse than Thought
The argument (a) depends on your being a random observer and (b) makes only a statistical prediction. If you are one of those early or late observers, you will come to the wrong conclusion, and probability doesn’t help you at that point at all.
Also: once you start creating more and more variants of the same pattern (double DA, other time frames), you don’t really make the probability worse; you are p-hacking. That doesn’t change reality, and you can’t reliably learn anything about reality that way.
I might be in a simulation, and such checks might change my prior for that, but it is quite low anyway, like my priors for so many other strange and newfangled ways reality could be, theism included.
Yes, it’s a statistical prediction. The 90% confidence interval will be correct for 90% of the people who use this method; 10% will be wrong. A priori, you are 9 times more likely to be in the first group than in the second.
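The 90%-for-90%-of-users property can be checked with a quick Monte Carlo sketch (my own illustration, not from the thread): draw a hidden total N, a uniform rank within it, and count how often the delta-t interval contains N.

```python
import random

random.seed(0)

def gott_interval(rank, conf=0.90):
    # Delta-t style interval: with probability `conf`, your fractional
    # position rank/N lies in ((1-conf)/2, (1+conf)/2), so N lies below.
    lo_f, hi_f = (1 - conf) / 2, (1 + conf) / 2   # 0.05 and 0.95
    return rank / hi_f, rank / lo_f               # (N_low, N_high)

trials, hits = 100_000, 0
for _ in range(trials):
    total = random.uniform(1, 1_000_000)   # unknown "true" total N
    rank = random.uniform(0, total)        # your position, uniform in (0, N)
    lo, hi = gott_interval(rank)
    hits += lo < total < hi

print(round(hits / trials, 2))   # close to 0.90, whatever N's distribution
```

Note that the coverage does not depend on how the hidden totals are distributed, which is exactly what makes the argument feel like a free lunch.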
“Once you start creating more and more variants of the the same pattern (double DA, other time frames) you don’t really make the probability worse, you are doing p-hacking.”
I don’t see this as an alternative variant to fudge the numbers. To me this seems to be the correct way to do the calculation, which is what makes the above argument correct: 90% of people who use this argument will be right.
Whereas the original version assumes you are a randomly chosen human, which is obviously incorrect, as most humans were not born at a time when this kind of statistical knowledge exists. The very fact that you are asking about the doomsday argument shows there is something special about you and puts you into a different reference class.
Why?
Because, as I said, most humans would never even think of the doomsday argument, so the argument can’t apply to them. In order to get the mathematical guarantee that 90% of people who use the argument will be correct, you need to restrict your reference class to people familiar with the argument.
More generally, the Copernican principle says that there is nothing particularly special about this exact moment in time. But we know there is something special: the modern world is very different from the ancient world. The probability of these ideas occurring to an ancient person is very different from the probability of them occurring to a modern person, and any anthropic reasoning should adjust for that.
“The probability of these ideas occurring to an ancient person...”
In the ancient world it was very common to predict the imminent end of the world.
And in my own case, before ever having heard of the Doomsday argument, the argument occurred to me exactly in the context of thinking about the possible end of the world.
So it doesn’t seem particularly unlikely to occur to an ancient person.
How so?
See the Gospels for examples.
That’s what I thought you meant. But Christianity has existed for less than 4% of humanity’s time, and what we ordinarily call “the ancient world” started 3000-6000 years earlier.
On the other hand, fear of an end of the world (as they knew it) seems not unlikely at any time.
Creating reference classes as small as you like is easy. But the predictive power diminishes accordingly...
Double negatives exist to help hide what you’re saying. If it’s somewhat likely, show me a single clear example that predates Christianity. The story of Noah says such a flood will never happen again. The Kali Yuga was supposed to last more than 400,000 years.
There are two possible ways to try to rebut the version of the DA that uses those who know about the DA as a reference class, but they still don’t work (copying my comment from a similar thread about the origin of the Universe):
A. One possible counterargument here is the following. Imagine that any being has a rank X proportional to its complexity (or year of birth). But there will be infinitely many beings which are 10X as complex, 100X as complex, and so on, so any being with finite complexity is at the beginning of the complexity ladder, and any of them may be surprised to be so early. So there is no surprise in being surprised.
But we should still expect to be in the middle of that infinity, and our situation is not like that: it looks like we have just enough complexity to start to understand the problem, which is still surprising.
B. Another, similar rebuttal: imagine all beings which are surprised by their position. The fact that we are in this set results only from the definition of the set, not from any property of the whole Universe. Example: all people who were born on 1 January may be surprised that their birthday coincides with the New Year, but it doesn’t give them any information about the length of the year.
But my birthday is randomly positioned inside the year (September), and in most testable cases mediocrity logic works as predicted.
“In most testable cases mediocrity logic works as predicted.”
Exactly, but it doesn’t need testing anyway, because we know that it is mathematically necessary. That is why I think the DA is true, and the fact that everyone tries to refute it just shows wishful thinking.
(And by the way, my birthday is in late June and would work well to predict the length of the year. And my current age is almost exactly half of the current average lifespan of a human.)
I agree about wishful thinking.
Ok, but what if you were Omega? How could you try to escape the DA curse?
One way is to reset your own clock, or to reduce the number of people to 1, so the DA will still hold but will not result in total extinction.
Another way is to hope that some other strange thing, like quantum immortality, will help “me” survive.
Wouldn’t that mean surviving alone?
It looks like it would be, alone ((( Not nice.
But QI favours the worlds where I am better able to survive, so maybe I will have some kind of superpowers or will be uploaded. So I will probably be able to create several friends, but not many, as that would change the probability distribution in the DA and so make this outcome less probable.
Another option is to put me in a situation where my life or death is unequivocally connected with the lives of a group of people (e.g. if we are all in one submarine). In that case we will all survive.
This interpretation of the DA puts you in the class of really intelligent observers, who are able to understand statistics, logic, etc. It helps us solve the so-called reference class problem in the most natural way: it excludes animals, unborn children, and Neanderthals from the class of beings from which we are randomly chosen.
Unfortunately, it shortens the most probable lifetime of our class.
The problem with the doomsday argument is that it is a correct assignment of probabilities only if you have the very small amount of information specified in the argument. More information can change your predictions—the prediction you would make if you had less information gets overridden by the prediction that uses all your information.
Let’s use the example of picking a random tree. Suppose you know about the existence of tree-diseases that make trees sick and more likely to die, and you know that some trees are sick and some are healthy. You pick a random tree and it is ten years old and sick. You now should update your prediction of the average tree age toward 10 years, but you cannot expect that you have picked a point near the middle of this tree’s life. Because you know it is sick, you can expect it to die sooner than that.
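The tree example can be made concrete with a toy simulation (the model and its parameters are my own assumptions, not from the comment): a tree falls sick at a random time and then survives only a short while longer. Inspecting random trees and keeping the sick ones shows how the extra information overrides the naive DA guess that remaining life roughly equals current age.

```python
import random

random.seed(1)

# Toy model (my assumption): a tree falls sick after Exp(mean=20) years
# and then survives only Exp(mean=3) more years before dying.
ages, remaining = [], []
for _ in range(200_000):
    sick_at = random.expovariate(1 / 20)
    dies_at = sick_at + random.expovariate(1 / 3)
    now = random.uniform(0, dies_at)   # "pick a random tree" = look at a
    if now > sick_at:                  # random moment; keep the sick ones
        ages.append(now)
        remaining.append(dies_at - now)

mean_age = sum(ages) / len(ages)
mean_left = sum(remaining) / len(remaining)
# The naive DA guess is remaining ~ age; knowing the tree is sick
# overrides that: expected remaining life is only a couple of years.
print(round(mean_age, 1), round(mean_left, 1))
```

The point of the sketch is only that conditioning on sickness drives the remaining-life estimate far below the tree's age, which is the "more information wins" claim above.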
Well, I don’t have any statistics on how long civilizations last. It’s true that the DA is a very naive way of estimating, but I think that at this time all we can make are very naive estimates.
I think that when I add other information, like my belief in x-risk, the estimates get even worse. It really does feel like our civilization is at its peak and it’s all downhill from here. Between how many dangerous technologies we invent and how many finite resources we are using up, the estimates given by the DA certainly feel plausible.
As I commented on your blog, I thought of the argument myself before I heard it from anyone, and I am unlikely to be unique in that, which makes things slightly less bad, since there were probably lots of people (in absolute numbers) who thought of it.
I also tried to raise the problem of DA-aware observers before but never met any understanding, and now we have 3 people who seem to be talking about the same thing.
We could name them (us) double-DA-aware observers: the ones who know about the DA and also know that the DA is applicable only to observers who know about the DA.
If we apply all the same DA logic to this small group, we could get even worse results (and also fall into infinite recursion). But that will not happen, as Carter was already a double-DA-aware observer in 1983, which was 33 years ago. So even if the number of double-DA-aware observers is growing exponentially, we may still have 10-20 years before the end. (And that is in accordance with some strong-AI timing expectations.)
If one strong AI replaces all humans, or solves the DA paradox, it will resolve the DA without total extinction.
I wrote about DA (for group of people who know about DA) here: http://lesswrong.com/lw/mrb/doomsday_argument_map/
It doesn’t influence the timing of the possible catastrophe much. Most people who go deep into the topic have read (and published) about the DA, so we could use the number of articles about the DA to get a known distribution of DA-aware observers. I suspect it is exponential. This means that the median rank of DA-aware observers is near the end of the timeline.
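The arithmetic behind "exponential growth pushes the median observer near the end" can be sketched as follows; the doubling time used here is an assumed parameter, not a figure from the comment.

```python
import math

def remaining_time(doubling_time_years, confidence):
    # Cumulative observers grow as 2^(t/T). With probability `confidence`
    # you are past the fraction (1 - confidence) of all observers who will
    # ever exist, so the time left is at most T * log2(1 / (1 - confidence)).
    return doubling_time_years * math.log2(1 / (1 - confidence))

for conf in (0.5, 0.9):
    print(conf, round(remaining_time(10, conf), 1))
# With an assumed 10-year doubling time: at 50% confidence at most 10.0
# years remain; at 90% confidence at most ~33.2 years.
```

In other words, under exponential growth the DA's remaining time shrinks to a few doubling times, regardless of how long the process has already been running.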
The best rebuttal I know here is that the question is not asked randomly, but conditionally on my position. That is, when I ask “why am I so early?” I know that I am “early”, and I am not a random person from the whole group. So all who are early will be more interested in the DA and will ask the question more frequently. But does it completely compensate for the DA?
For example, people who were born on 1 January may be more interested in the question of why their birthday is so early in the year. But that doesn’t prevent me from using my birthday to estimate the number of days in a year. (My birthday is on the 243rd day, so the number of days in a year is less than 500 with about 50 percent probability.)
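The delta-t arithmetic behind the birthday example is a one-liner (my own sketch): if your rank r is uniform on (0, N), then P(N < r/f) = P(r/N > f) = 1 - f.

```python
def confidence_total_below(rank, bound):
    # P(total N < bound) when rank is uniform on (0, N):
    # N < bound  iff  rank/N > rank/bound, which has probability 1 - rank/bound.
    return 1 - rank / bound

# Born on day 243: the year has fewer than 486 days with probability 1/2,
print(round(confidence_total_below(243, 486), 2))   # 0.5
# and fewer than 500 days with probability about 0.51.
print(round(confidence_total_below(243, 500), 3))   # 0.514
```

So the "less than 500 with 50 per cent probability" in the comment is the slightly rounded form of "less than 486 with exactly 50 per cent probability".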
I came to the same conclusion: the DA should be applied only to the people who know about the DA. And that makes it worse. There are two ways to apply it to DA-people: using years, and using the number (rank) of people who know about the DA. The second way is even worse, as the number of such people is growing exponentially, and so we are near the end of the group of people who know about the DA.
It may not mean extinction, but rather a strong and universally accepted rebuttal of the DA, or a drastic fall in the number of such people.
A global catastrophe in the next 10-50 years unfortunately seems to be the most likely explanation.
(The ideas were also known to Carter in 1983, when he presented the anthropic principle and already knew about the DA. At that time he was the only person on Earth who knew about the DA, and he understood its implication for the small class of people who know about the DA, and it really made him worry about near-term extinction. I forget where I read about this.)
I would rather see the doomsday argument as a version of Sleeping Beauty.
Different people appear to have different opinions on this kind of argument. To me, the solution appears rather obvious (in retrospect):
If you ask a decision theory for advice on decisions, then there is nothing paradoxical at all, and the answer is just an obvious computation. This tells you that “probability” is the wrong concept in such situations; rather, you should ask about “expected utility” only, as this is much more stable under all kinds of anthropic arguments.