I am trying to understand precisely what people are talking about, and I asked on LW because people here are more likely than most philosophers to have a precise understanding of the Doomsday Argument (DA).
If my original example takes place in a Big World (e.g. the total population depends on a quantum event that happened long ago), then it seems to me that the Self-Sampling Assumption (SSA) doesn't make the DA go through. Let's say an urn contains 1 red ball, 1,000 yellow balls and 1,000,000 green balls. Balls of each color are numbered. You draw a ball at random and see that it says "50", but you're colorblind and cannot see the color. Then Bayes says you should assign probability 0 to red, 0.5 to yellow, and 0.5 to green, so the relative probabilities of "worlds compatible with your existence" are unchanged.
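Here is a minimal sketch of that calculation in Python, assuming the draw is uniform over all 1,001,001 balls (the counts are just the ones from the example):

```python
from fractions import Fraction

# The urn: 1 red ball (numbered 1), 1,000 yellow (numbered 1..1000),
# and 1,000,000 green (numbered 1..1,000,000); one ball drawn uniformly.
counts = {"red": 1, "yellow": 1_000, "green": 1_000_000}
total = sum(counts.values())

# Each color contributes at most one ball marked "50".
joint = {c: Fraction(1, total) if n >= 50 else Fraction(0)
         for c, n in counts.items()}
evidence = sum(joint.values())                  # P(drawn ball says "50")
posterior = {c: p / evidence for c, p in joint.items()}

print(posterior)  # red: 0, yellow: 1/2, green: 1/2
```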
So I'm still confused. Does the updating rule used by the DA rely on a fundamental difference between big worlds and small worlds? That looks suspicious, because human decisions shouldn't change depending on whether a coinflip is classical or quantum, yet the SSA seems to say they should, by arbitrarily delineating parts of reality as "worlds". There's got to be a mistake somewhere.
The implied algorithm is that you first pick a world size s from some distribution, and then pick an index uniformly from 1..s. This corresponds to the case where there are three separate urns, one with the 1 red ball, one with the 1,000 yellow balls, and one with the 10^6 green balls; you pick an urn without knowing which one it is, and then draw a ball from it.
(I find the second part, picking an index uniformly from 1..s, questionable; but there’s only one sample of evidence with which to determine what the right distribution would be, so there’s little point in speculating on it.)
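Here is a sketch of the same update under that two-stage sampling, assuming (purely as an illustration) that the three worlds are equally likely a priori:

```python
from fractions import Fraction

# Two-stage (SSA-style) sampling: pick a world, then pick an index
# uniformly from 1..s, where s is that world's size.
sizes = {"red": 1, "yellow": 1_000, "green": 1_000_000}
prior = Fraction(1, 3)  # assumed: equal prior over the three worlds

# P(index == 50 | world of size s) = 1/s if s >= 50, else 0
likelihood = {w: Fraction(1, s) if s >= 50 else Fraction(0)
              for w, s in sizes.items()}
evidence = sum(prior * likelihood[w] for w in sizes)
posterior = {w: prior * likelihood[w] / evidence for w in sizes}

print(posterior)  # red: 0, yellow: 1000/1001, green: 1/1001
```

Unlike the single-urn version above, this update strongly favors the small (yellow) world; that shift toward small worlds is exactly what makes the DA go through.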
Let’s say an urn contains 1 red ball, 1000 yellow balls and 1000000 green balls. Balls of each color are numbered.
This is not equivalent to the original problem. In the original problem, if there are 1,000 people you have a 1/1,000 chance of being the 50th, and if there are 1,000,000 people you have a 1/1,000,000 chance of being the 50th. In your formulation, you have a 1/1,001,001 chance of getting each of the balls marked '50'.
It might be equivalent to have the urn contain one million red balls marked '1', one million yellow balls divided into one thousand sets which are each numbered one through one thousand, and one million green balls numbered one through one million. In this case, if you draw a ball marked '50', it can be either the one green ball marked '50' or any of the thousand yellow balls marked '50', and the latter case is one thousand times more likely than the former.
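A quick sketch checking that this balanced urn reproduces the same numbers as picking one of three equally likely urns and then a ball from it:

```python
from fractions import Fraction

# The balanced urn: one million balls of each color.
# Red: all marked '1'. Yellow: 1,000 full sets of 1..1000, so 1,000
# balls marked '50'. Green: 1..1,000,000, so a single '50'.
marked_50 = {"red": 0, "yellow": 1_000, "green": 1}
total_50 = sum(marked_50.values())

# Given that the drawn ball is marked '50', condition on the '50' balls.
posterior = {c: Fraction(n, total_50) for c, n in marked_50.items()}
print(posterior)  # red: 0, yellow: 1000/1001, green: 1/1001
```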
Thanks. Carl, jimrandomh and you have helped me understand what the original formulation says about probabilities, but I still can't understand why it says that. My grandparent comment and its sibling can be interpreted as arguments against the original formulation; what do you think about them?
In general I’m a lousy one to ask about probability; I only noticed this particular thing after a few days of contemplation. I was more hoping that someone else would see it and be able to use it to form a more coherent explanation.
I do think, regarding the sibling, that creating or destroying people is incompatible with assuming that a certain number of people will exist. I expect that any hypothesis generating that prediction carries an implicit assumption that nobody is going to create, destroy, or fail to create people on the basis of the hypothesis's existence. In other words, causation doesn't work like that.
Edit: It might help to note that what originally led me to notice the flaw in your formulation was that the different worlds, represented by the different colors, were not equally likely. If you pick a ball out of your urn and don't look at the number, it's much more likely to be green than yellow, and very, very unlikely to be red. If you pick a ball out of my urn, there's an even chance of it being any of the three colors.