The following is copy-pasted from some stream-of-consciousness-style writing from my own experimental wave/blog/journal, so it may be kinda messy. If this gets upvoted, I might take the time to clean it up some more. The first part of this is entirely skippable.
(skippable part starts here)
I just read this LW post. I think the whole argument is silly. But I still haven’t figured out how to explain the reasons clearly enough to post a comment about it. I’ll try to write about it here.
Some people have posted objections to it in the comments, but so far none that clearly show the problem.
This is basically the same problem as with the Doomsday Argument, and with anthropics in general: generalizing from one example, or, more accurately, trying to do statistics with a sample size of 1. Improbable events happen. If the people experiencing an improbable event try to do anthropic reasoning about it, they will conclude that “we just happened to be in this improbable category” is itself improbable, and that they are therefore probably in some other, more probable category that produces the same observations. And they would be right: they probably are in the more probable category. But some observers really are in the improbable category. And if those observers act on the assumption that they are in the probable category, they will be worse off as a result. Not because they made a mistake in the math, but because they just happened to be in the improbable category, where any action premised on being in the probable category is suboptimal.
Sorry, the above was confusing. I should rewrite it using specific examples, not general descriptions.
(skippable part ends here)
One standard example is the Doomsday Argument: it would be improbable for us to find ourselves in a low-population, pre-Singularity past if there is going to be a future containing many orders of magnitude more observers. The conclusion of the Doomsday Argument is that there is probably no post-Singularity future, and that humanity will probably go extinct soon. And yes, that is what “the math” says. But it would be an extremely bad idea to assume that doomsday will inevitably come soon and that there is therefore no point in trying to prevent it. The math says it’s improbable to find yourself among the few people living before the Singularity; it doesn’t say it’s impossible. Some people really will find themselves alive before the Singularity, and it would be a tragedy of epic proportions if those people, upon recognizing that their situation is improbable, decided there was no point in trying to help make sure the Singularity happens and turns out okay for everyone involved.
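For concreteness, here's a toy version of what “the math” does under the Self-Sampling Assumption: treat your birth rank as a random draw from all humans who will ever live, and update. Every number below (the 50/50 prior, the ~100 billion birth rank, both total-population figures) is a made-up placeholder, not a claim about the actual values:

```python
# Toy Self-Sampling-Assumption version of the Doomsday Argument.
# All numbers are illustrative assumptions, not claims about the real future.

RANK = 100e9  # our rough birth rank: ~100 billion humans born so far (assumed)

hypotheses = {
    "doom soon": {"prior": 0.5, "total_humans": 200e9},   # ~200 billion ever
    "doom late": {"prior": 0.5, "total_humans": 200e12},  # ~200 trillion ever
}

# SSA likelihood: given N total observers, each birth rank 1..N is equally
# likely, so P(our rank | N) = 1/N, and 0 if our rank exceeds N.
def likelihood(rank, total):
    return 1.0 / total if rank <= total else 0.0

unnormalized = {h: v["prior"] * likelihood(RANK, v["total_humans"])
                for h, v in hypotheses.items()}
total_mass = sum(unnormalized.values())
posterior = {h: p / total_mass for h, p in unnormalized.items()}

print(posterior)  # ~{'doom soon': 0.999, 'doom late': 0.001}
```

Note what the posterior doesn't do: it doesn't send “doom late” to zero. Under the “doom late” hypothesis, a few early observers exist no matter what, and they're exactly the people whose actions matter.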
The same applies to the Simulation Argument: if there is a post-Singularity future containing lots of ancestor simulations, then it would be improbable for us to find ourselves in the real pre-Singularity universe rather than in one of those simulations. And yes, that is what “the math” says. But it would be a tragedy of epic proportions to assume that you must be in one of those simulations and that there is therefore no point in trying to help make sure the Singularity happens and turns out okay for everyone involved. Oh, and it would also be a good idea to try to prevent any ancestor simulations from being created in the future. Or at least that’s my opinion, as someone who doesn’t want to be in an ancestor simulation.
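The counting step behind this one is even simpler. A back-of-the-envelope sketch, where the function and all its numbers are illustrative assumptions (each simulated history is assumed to contain about as many observers as the real one):

```python
# Toy version of the Simulation Argument's counting step.
# Assumes each simulated pre-Singularity history has roughly the same
# number of observers as the one real history; all numbers illustrative.

def fraction_simulated(num_simulations, pop_per_history=100e9):
    """Fraction of pre-Singularity-like observers who are simulated."""
    simulated = num_simulations * pop_per_history
    real = 1 * pop_per_history
    return simulated / (simulated + real)

for n in (0, 1, 10, 1_000_000):
    print(n, fraction_simulated(n))
# 0 -> 0.0, 1 -> 0.5, 10 -> ~0.909, 1000000 -> ~0.999999
```

Driving that fraction toward 1 never makes the real-history observers disappear; it just makes them rare. And they're the ones who decide whether the simulations get run at all.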
So, how does all this apply to that LW post? Oh, right: the assumption that animals are probably not conscious. The math is less clear in this case, but even if it turns out to be correct, it would still be a bad idea to forget that word “probably”. It would still be tragic to guess wrong about whether animals are conscious and to treat them cruelly for your own benefit as a result. And, as some commenters pointed out, the probability of guessing wrong is quite high. And so:
(probability that animals are conscious) × (suffering caused by treating animals cruelly) > (probability that animals are not conscious) × (minor inconvenience to yourself caused by not treating animals cruelly)
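To make the shape of that comparison concrete, here are some made-up magnitudes. The specific values are mine, not the post's; the point is only that the inequality is robust when the two stakes differ by orders of magnitude:

```python
# Toy expected-value comparison for the animal-consciousness bet.
# All magnitudes are made-up placeholders; only the asymmetry matters.

p_conscious = 0.5          # assumed probability that animals are conscious
suffering_if_cruel = 1000  # harm if they are conscious and treated cruelly
inconvenience = 1          # cost to you of treating them well regardless

expected_harm_of_cruelty = p_conscious * suffering_if_cruel    # 500
expected_cost_of_kindness = (1 - p_conscious) * inconvenience  # 0.5

print(expected_harm_of_cruelty > expected_cost_of_kindness)  # True
# Even at p_conscious = 0.01 the inequality holds (10 > 0.99), because
# the stakes on the two sides differ by orders of magnitude.
```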
Or at least that’s my guess. I could be wrong.
Peter Thiel uses a similar argument about investing for the future: if it all goes bust, then your investments don’t matter either way, but if it turns out okay, then you win big. No downside vs. a huge upside: invest.
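Spelled out as a toy payoff table (illustrative numbers again), that's a dominance argument:

```python
# Toy payoff table for the "no downside vs. huge upside" argument.
# Payoffs are illustrative assumptions.

payoffs = {
    ("invest", "bust"): 0,    # everything is gone anyway; the cost doesn't matter
    ("invest", "okay"): 100,  # you win big
    ("hoard",  "bust"): 0,
    ("hoard",  "okay"): 1,
}

# "invest" weakly dominates "hoard": never worse in any state, strictly
# better in at least one, so it wins for any probability of "okay".
never_worse = all(payoffs[("invest", s)] >= payoffs[("hoard", s)]
                  for s in ("bust", "okay"))
sometimes_better = any(payoffs[("invest", s)] > payoffs[("hoard", s)]
                       for s in ("bust", "okay"))
print(never_worse and sometimes_better)  # True
```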