This supposedly “natural” reference class is full of weird edge cases, in the sense that I can’t write an algorithm that finds “everybody who asks the question X”. Firstly, “everybody” is not well defined in a world that contains everything from trained monkeys to artificial intelligences. And “who asks the question X” is under-defined, as there is no hard boundary between a different way of phrasing the same question and a slightly different question. Does someone considering the argument in Chinese fall into your reference class? Even more edge cases appear with mind uploading, different mental architectures, etc.
If you get a different prediction from taking the reference class of “people” (for some formal definition of “people”) and then updating on the fact that you are wearing blue socks than the prediction you get from the reference class “people wearing blue socks”, then something has gone wrong in your reasoning.
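To make this consistency requirement concrete, here is a minimal toy model (the population sizes, priors, and sock fraction are all made up for illustration). It uses simple self-sampling-style counting: weight each hypothesis by its prior times the number of observers in the chosen reference class. Updating the “people” reference class on the sock observation and starting directly from the narrower “people wearing blue socks” class must give the same posterior:

```python
from fractions import Fraction

# Toy model: two hypotheses about the total number of people ever born,
# with a uniform prior. In every world the same fraction of people wear
# blue socks, i.e. socks carry no information about the hypothesis.
hypotheses = {
    "early_doom": {"people": 100, "prior": Fraction(1, 2)},
    "late_doom": {"people": 1000, "prior": Fraction(1, 2)},
}
blue_sock_fraction = Fraction(1, 10)  # assumed identical across hypotheses

def posterior(reference_counts):
    """Weight each hypothesis by prior * (observers in the reference
    class), then normalize -- a simple self-sampling-style update."""
    weights = {h: hypotheses[h]["prior"] * reference_counts[h]
               for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

people = {h: hypotheses[h]["people"] for h in hypotheses}

# Route 1: reference class "people", then update on "I wear blue socks".
# The likelihood of blue socks is the same under every hypothesis, so the
# update multiplies each observer count by the same factor.
post_updated = posterior({h: people[h] * blue_sock_fraction
                          for h in hypotheses})

# Route 2: reference class "people wearing blue socks" from the start.
post_narrow = posterior({h: people[h] * blue_sock_fraction
                         for h in hypotheses})

# The two routes agree, and (since socks are uninformative here) both
# match the plain "people" posterior.
assert post_updated == post_narrow == posterior(people)
```

Because the blue-sock fraction is the same in every world, the sock factor cancels in the normalization; the two routes only diverge if the chosen fact is correlated with the hypothesis, which is exactly the failure mode the comment above is pointing at.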
The doomsday argument works by failing to update on anything but a few carefully chosen facts.
Edge cases do not account for the majority of cases (in most cases) :) But for anthropics we only need the majority of cases.
I don’t ignore other facts based on nitpicking. A fact needs a strong, one-to-one causal connection with the computation’s result in order not to be ignored. The color of my socks is a random variable relative to my opinion about the DA, because it doesn’t affect my conclusions.
I personally think about the DA in two languages, and the result is the same, so the language is also a random variable for this reasoning.