Why shouldn’t the probability predictably update for the Self-Indication Assumption? This version of probability isn’t supposed to refer to a property of the world alone, but to some combination of the state of the world and the number of agents in the various scenarios.
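To make that concrete, here’s a minimal sketch (with made-up copy counts, not anything from the original setup) of how an SIA-style credence depends on both the prior over worlds and the per-world observer counts, so it predictably moves whenever the counts change:

```python
# Toy SIA-style credence: weight each world's prior by its observer count,
# then normalize. The copy counts below are illustrative assumptions only.

def sia_credence(priors, observer_counts):
    weights = {w: priors[w] * observer_counts[w] for w in priors}
    total = sum(weights.values())
    return {w: weights[w] / total for w in weights}

# Fair coin; suppose 1 copy on heads, 2 on tails (invented numbers).
print(sia_credence({"heads": 0.5, "tails": 0.5}, {"heads": 1, "tails": 2}))
# -> {'heads': 0.333..., 'tails': 0.666...}

# If a lever later adds a copy in the heads world, the credence shifts,
# even though nothing about the coin itself has changed:
print(sia_credence({"heads": 0.5, "tails": 0.5}, {"heads": 2, "tails": 2}))
# -> {'heads': 0.5, 'tails': 0.5}
```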
Regarding the Self-Sampling Assumption: if you knew ahead of time that the lever would be pulled, then you could have updated before it was pulled. If you didn’t know that the lever would be pulled, then you gained information, specifically about the number of copies in the heads case. It’s not that you knew there was only one copy in the heads world; it’s just that you thought there was only one copy, because you didn’t think a mechanism like the lever would exist and be pulled. In fact, if you knew that the probability of the lever existing and then being pulled was 0, then the scenario would contradict itself.
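To put made-up numbers on that (this sketch is mine, not part of the original argument): thinking there was “only one copy” just means the lever branch had a low prior, and seeing the pull is an ordinary Bayesian update onto that branch, which is only coherent if its prior was nonzero.

```python
# Hedged illustration with invented numbers: before the pull, your expected
# copy count in the heads world is close to one because the lever branch has
# a low prior; after the pull, you condition on that branch.

p_lever = 0.01                      # prior: a lever exists and gets pulled
copies_if_lever = 2                 # heads-world copies if it is pulled
copies_if_no_lever = 1              # heads-world copies otherwise

expected_copies = (p_lever * copies_if_lever
                   + (1 - p_lever) * copies_if_no_lever)
print(expected_copies)              # 1.01, i.e. "basically one copy"

# Conditioning on the pull sends the estimate to 2. With p_lever == 0 the
# observation would contradict the prior, which is the self-contradiction
# mentioned above.
```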
As Charlie Steiner notes, it looks like you’ve lost information, but that’s only because you’ve gained information that your existing information is inaccurate. Here’s an analogy: suppose I give you a scientific paper that contains what appears to be lots of valuable information. Next, I tell you that the paper is a fraud. It seems like you’ve lost information, since you are now less certain about the world, but you’ve actually gained it. I’ve written about this before in the context of the Dr Evil problem.
You are deciding whether or not to pull the lever. The probability of a past event, known to be in the past, depends on your actions now.
To use your analogy: you are the one deciding whether or not to label the scientific paper inaccurate, and it’s your choice of label, not anything else, that makes it inaccurate.
Oh, actually, I think what I wrote above was wrong. The Self-Sampling Assumption is supposed to preserve probabilities like this; it’s the Self-Indication Assumption that is relative to agents.
That said, I have a different objection: I’m confused about why pulling the lever would change the odds. Your reference class is all copies that were the <last copy that was originally created>, so any further clones you create fall outside the reference class.
If you want to set your reference class to <the last copy that was created at any point>, then:
Heads case, first round: if you pull the lever, then you fall outside the reference class.
Heads case, second round: the lever no longer does anything, as it has already been used.
Tails case: pulling the lever does nothing.
So you don’t really have the option to pull the lever to create clones. If you were using a different reference class, what was it?
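Here’s the toy bookkeeping I have in mind for the <last copy that was created at any point> class; the encoding of the scenario is my own assumption about the setup:

```python
# Toy bookkeeping for the case analysis above, under the reference class
# <the last copy that was created at any point>. The scenario encoding is
# an assumption of mine, not something from the original post.

def last_created(copies):
    """This reference class picks out only the most recently created copy."""
    return copies[-1]

# Heads case, first round, lever pulled: a clone is appended, so the
# original copy (you) is no longer the last-created copy.
print(last_created(["original", "lever_clone"]))   # -> 'lever_clone'

# Heads case, second round: the lever is spent, so the list is unchanged.
# Tails case: pulling the lever does nothing, so the list is unchanged too.
print(last_created(["original"]))                  # -> 'original'
```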