Perhaps the fact that we are so confused by anthropic reasoning is a priori evidence that we are very early anthropic reasoners and thus that the Doomsday argument is false. Further, not every human is an anthropic reasoner. If the growth rate of anthropic reasoners is less than the growth rate of humans, we should then extend our estimate of the lifespan of a human race with anthropic reasoners (and of course this says nothing about the lifespan of humanity without anthropic reasoners).
A handful of powerful anthropic reasoners could enforce a ban on anthropic reasoning: burning books, prohibiting its teaching, and silencing those who came to be anthropic reasoners on their own. If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure) with life spans averaging 100 years, that would put us in the final 95% (I think; anyone have an educated estimate of how many anthropic reasoners there have been up to this point in time?) until a permanent solution was reached or humanity began spreading and we would need at least one enforcer for every colony—but given optimistic longevity scenarios we could still keep the anthropic reasoner population to a minimum. The permanent solution is probably obvious: a singleton could enforce the ban by itself and make itself the last, or at least close to last, anthropic reasoner in the galaxy.
The above strikes me as obviously insane so there has to be a mistake somewhere, right?
If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure) with life spans averaging 100 years that would put us in the final 95% …
That sounds like something Evidential Decision Theory would do, but not Timeless or Updateless Decision Theories. Unless you think that reaching a certain number of anthropic reasoners would cause human extinction.
Hmmm. Yes, that's right, as far as I understand those theories at least. I guess my point is that something seems very wrong with an argument that makes predictions but offers nothing in the way of causal regularities whose variables could in principle be manipulated to alter the result. It isn't even like seeing a barometer indicate low pressure and then predicting a storm (while not understanding the variables that led to the correlation between barometers indicating low pressure and storms coming): there isn't any causal knowledge involved in the Doomsday argument at all, afaict. Note that this isn't the case with all anthropic reasoning; it is peculiar to this argument. The only way we know of predicting the future is by knowing earlier conditions and the rules governing those conditions over time: the Doomsday argument is thus an entirely new way of making predictions. This suggests to me that something has to be wrong with it.
Maybe the self-indication assumption is the way out; I can't tell if I would have the same problem with it.
Maybe somebody will just come up with an elegant explanation of the underlying probability theory some time in the next few years, it’ll go viral among the sorts of people who would otherwise have attempted anthropic reasoning, and the whole thing will go the way of geocentrism, but with fewer religiously-motivated defenders.