There is a “natural reference class” for any question X: it is everybody who asks the question X.
In the case of classical anthropic questions like the Doomsday Argument, such reasoning is very pessimistic: the class of people who know about DA has existed only briefly, so its predicted end is very soon.
Members of the natural reference class could bet on the outcome of X, but the betting result depends on the betting procedure. If the betting payoff doesn’t depend on the degree of truth (I am either right or wrong), then we get weird anthropic effects.
One such weird effect is a net win in betting: the majority of members of the DA-aware reference class do not live at the beginning of the world, so DA may be used to predict the end of the world.
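A minimal sketch of the betting claim (my own illustration, with an assumed 95%-confidence Doomsday bound, not numbers from the original): if every observer at birth rank n bets “the total population will be less than 20·n”, then whatever the true total turns out to be, 95% of the class wins the bet.

```python
def winning_fraction(total, factor=20):
    """Fraction of observers whose bet 'total < factor * rank' wins.

    An observer at birth rank n (1..total) wins iff factor * n > total,
    i.e. iff they are not in the earliest 1/factor of the class.
    """
    wins = sum(1 for n in range(1, total + 1) if factor * n > total)
    return wins / total

# The winning fraction is the same for any actual total population:
for total in (100, 1000, 50000):
    print(total, winning_fraction(total))  # always 0.95
```

This is just the arithmetic behind “the majority of the class wins”: the losers are exactly the earliest 1/20 of observers, regardless of when the world actually ends.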
If we take into account the edge cases which produce wildly wrong results, they will offset this net win.
This supposedly “natural” reference class is full of weird edge cases, in the sense that I can’t write an algorithm that finds “everybody who asks the question X”. First, “everybody” is not well defined in a world that contains everything from trained monkeys to artificial intelligences. And “who asks the question X” is under-defined, as there is no hard boundary between a different way of phrasing the same question and a slightly different question. Does someone considering the argument in Chinese fall into your reference class? Even more edge cases appear with mind uploading, different mental architectures, etc.
If you get a different prediction from taking the reference class of “people” (for some formal definition of “people”) and then updating on the fact that you are wearing blue socks, than you get from the reference class “people wearing blue socks”, then something has gone wrong in your reasoning.
The doomsday argument works by failing to update on anything but a few carefully chosen facts.
Edge cases do not account for the majority of cases (in most cases) :) But for anthropics we only need the majority of cases.
I don’t ignore other facts based on nitpicking. A fact needs to have a strong, one-to-one causal connection with the computation’s result in order not to be ignored. The color of my socks is a random variable relative to my opinion about DA, because it doesn’t affect my conclusions.
I personally think about DA in two languages, and the result is the same, so the language is also a random variable for this reasoning.
I had that idea at first, but of the people asking the question, only some of them actually know how to do anthropics. Others might be able to ask the anthropic question, but have no idea how to solve it, so they throw up their hands and ignore the entire issue, in which case it is effectively the same as them never asking it in the first place. Others may make an error in their anthropic reasoning which you know how to avoid; similarly, they aren’t in your reference class because their reasoning process is disconnected from yours. Whenever you make a decision, you are implicitly making a bet. Anthropic considerations alter how the bet plays out, and insofar as you can account for this, you can account for anthropics.
For any person who actually understands anthropics, there are 10 people who ask the question without understanding it (and 0.1 people who know anthropics better), but that doesn’t change my relative location in the middle. It doesn’t matter whether there are 20 people behind me and 20 ahead, or 200 behind and 200 ahead, if all of them live in the same time interval, say between 1983 and 2050.
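The invariance claimed above can be checked with a one-liner (using the hypothetical 20/200 counts from the comment): scaling the whole reference class by a constant factor leaves one’s fractional position in it unchanged.

```python
def relative_position(behind, ahead):
    """Fraction of the reference class that precedes me."""
    return behind / (behind + ahead)

# Scaling both counts by 10 does not move my relative location:
assert relative_position(20, 20) == relative_position(200, 200)
print(relative_position(20, 20), relative_position(200, 200))
```

So admitting the 10x crowd of non-understanders into the class changes its size but not the DA-style estimate, as long as they are distributed over the same interval.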
However, before making any anthropic bet, I need to take into account logical uncertainty, that is, the probability that anthropics is not bullshit. I estimate this meta-level uncertainty as 0.5 (I wrote more about this in the meta-doomsday argument text).
Them knowing anthropics better than you only makes a difference insofar as they utilize a different algorithm / make decisions in a way that is disconnected from yours. For example, if we are discussing anthropics problem X, which you can both solve, then the fact that they can also solve Y and Z, which you can’t, is irrelevant here, as we are only asking about X. Anyway, I don’t think you can assume that people will be evenly distributed. We might hypothesize, for example, that the level of anthropics knowledge will go up over time.
“However, before making any anthropic bet, I need to take into account logical uncertainty”—that seems like a reasonable thing to do. However, at this particular time, I’m only trying to solve anthropics from the inside view, not from the outside view. The latter is valuable, but I prefer to focus on one part of a problem at a time.