Well, a few things to note:
Making up reasons against Pascal’s mugging based on an ‘it must be wrong’ feeling sounds an awful lot like rationalization. One has to stick to really solid logic; only in mathematics can you believe rather strongly in a conjecture, be motivated to prove it, and then make a valid proof.
One man’s decision affecting 3^^^^3 people has to be a very rare situation to find yourself in; you’re much more likely to be among those 3^^^^3. You have to adjust your prior for that. This should be enough reason not to give the $5.
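(For concreteness, here is a tiny sketch of that anthropic adjustment; the one-decider-vs-N-affected model and the stand-in number are my own illustrative assumptions, not anything claimed above.)

```python
from fractions import Fraction

# Illustrative sketch only: assume one decider and N affected people, and that
# you have no information about which of the N+1 roles you occupy.
def prob_you_are_the_decider(n_affected):
    return Fraction(1, n_affected + 1)

N = 10**100                      # stand-in for 3^^^^3, which won't fit anywhere
p_decider = prob_you_are_the_decider(N)

# The huge stake N is discounted by the tiny chance you are actually the decider:
print(float(p_decider * N))      # ~1.0, not astronomically large
```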
The other issue is that it is a hostage situation, and even in ordinary human hostage situations, whether or not you should give the money to the hostage taker depends solely on whether the hostages are more likely to be killed (or tortured) if the money is given than if it is not. Without further information about people who hold 3^^^^3 beings hostage for $5, you cannot make any prediction—the expected effect of giving $5 on the suffering of 3^^^^3 beings is 3^^^^3 * 0 = 0, and thus the expected utility of giving $5 is equal to the expected utility of not giving $5, minus the utility of having the $5 in your hands rather than in the mugger’s. It does not matter how many up arrows the mugger stacks; it may well be that, on average, giving money gets the hostages killed when the kidnapper is this psychopathic. Then one may estimate that giving the money has immense disutility. Caveat: one can imagine the inconvenient world where psychopaths keep their word and release hostages when demands are met.
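A minimal sketch of that cancellation, with made-up placeholder numbers (the kill probability, the stand-in for 3^^^^3, and the value assigned to keeping the $5 are all assumptions of mine):

```python
from fractions import Fraction

# Exact arithmetic, so the enormous hostage term can't swallow the tiny $5 term.
N = 10**100                          # stand-in for 3^^^^3
p_killed = Fraction(1, 2)            # assumed identical whether or not you pay
value_of_keeping_five = Fraction(1, 10**9)

eu_pay    = -p_killed * N                            # hostage term only
eu_refuse = -p_killed * N + value_of_keeping_five    # same term, plus the $5

print(eu_refuse > eu_pay)   # True: the hostage terms cancel, so the $5 decides it
```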
Actually, in real life we also consider the effect on unspecified potential future hostage takers, who may be motivated to take hostages if they see a hostage taker paid off. This is ostensibly why the USG will not (directly) pay off a hostage taker.
Also, we have to consider the value of the money, and our next best alternative to saving a hostage’s life. For example, if Dr. Evil is holding a hostage (it doesn’t matter who) for $1B, and you know we will not catch him if you pay him off, then you should probably just let him execute the hostage and use the money to buy food for a few thousand starving people somewhere who are just as desperate.
Yep. Well, those aspects of it are not so relevant to the 3^^^3 case, as they don’t scale with N.
Thanks for the feedback!
I sincerely hope that I’m not making up reasons against Pascal’s mugging based on a feeling that “it must be wrong”. I can’t help but agree, though, on the requirement for mathematics. I’ve done my best here to keep things as clean and logical as possible, though if I’ve lapsed somewhere I can’t tell; would you mind pointing me to it?
Yes, I believe that was the solution put forth by Robin Hanson. It seems overly specific; the same calculation should apply if it were a coin flip affecting the lives of 3^^^^3 people rather than a person’s decision. That’s part of what motivated me to come up with this; I just wasn’t satisfied with the current answer.
1: Well, part of the issue is that feelings can very well be right. The feeling is that the claim is too outrageous; that’s a genuine thing, but it is too hard to pin any probabilities onto.
One would think that the probability should fall off with the outrageousness of the claim, super-linearly. I.e., suppose that no claim is made: you are to give, or not give, $5 to a random person who has not claimed that the $5 will save 3^^^3 people. It is clear enough that the probability of this $5 saving 3^^^3 people has to be very small then, and it would fall off with the claimed number; it’s reasonable that it would fall off super-linearly. Then the person making that claim is just a piece of evidence that can’t boost the prior probability by a whole lot; see the posts here on Bayesian statistics. Indeed, if one is to give the mugger $5, one should also give $5 to people who didn’t even ask for money.
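Here is a toy version of that argument, under two assumptions that are mine rather than anything stated above: the prior falls off as 1/N^2 (one arbitrary super-linear choice), and the mugger’s say-so multiplies the odds by at most a bounded factor of 10^6.

```python
from fractions import Fraction

def prior_saves_n_people(n):
    return Fraction(1, n**2)          # assumed super-linear fall-off in the claim size

MAX_LIKELIHOOD_RATIO = 10**6          # the verbal claim is only boundedly strong evidence

for n in [10**6, 10**12, 10**24]:     # growing stand-ins for the claimed number
    posterior_bound = prior_saves_n_people(n) * MAX_LIKELIHOOD_RATIO
    expected_people_saved = posterior_bound * n
    print(n, float(expected_people_saved))
# Output shrinks as the claim grows (1.0, then 1e-06, then 1e-18): a bounded piece
# of evidence cannot rescue a prior that falls off faster than the stakes grow.
```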
Actually, come to think of it, I might’ve just nailed it, and also nailed the problem with using probabilistic reasoning in practice. You can easily pick some random hypothesis out of an enormously huge space, which gives it a very small prior, but then you forget about this enormous space.
2: I don’t see how it’s overly specific. Whether we consider a coin or a person, one randomly chosen (coin or person) affecting 3^^^^3 (coins or people) is unlikely. Still, the explanation is indeed somewhat problematic.
You might like to read this post, “Privileging the hypothesis.”
It assumes a particular account of anthropic reasoning with infinite certainty. If you get your anthropic hypotheses out of something like Solomonoff induction (the programs best approximating our sense inputs can be thought of as a combination of a simulation of our world plus a bit of code that acts as an “anthropic theory” and reads out part of the simulation as our sense inputs), then things like SIA, SSA, and “you’re more likely to be a given person if they have more causal influence” are not radically different in complexity. So you get Pascal’s mugging from the combination of 1) laws of physics allowing vast quantities of computation and 2) some kind of anthropic theory that makes it unlikely you are one of the mass of simulations.
Hypothesis: Yeah. The problem is that, apart from the trivial cases having to do with clearly made-up nonsense, it is very difficult to track how much the hypothesis got ‘cherry-picked’, since the process of choosing a hypothesis, when not entirely insane, should increase the probability of it being true over the hypotheses that this process did not pick.
Anthropic reasoning: I agree it’s kind of flimsy. On second thought, I don’t like this argument too much.
1: Well, my reasoning is that the more people the mugger threatens to kill, the less likely his claims are to be true. In the same way, if I were to claim that a row of 3^^^^3 coins would all turn up heads, it would be far less likely to come true than if I predicted that two coins would come up heads. At least that’s what I’m trying to get across in this post.
2: It seems overly specific to me because it seems like a bit too much of a hack, if you get my meaning?
As you see more and more heads, you become increasingly convinced the coins are biased. What’s the bias? With what probability p will a given flip come up heads? At the start you assign some mass to p=1, and some to lesser biases. After 10^1000 heads you can basically ignore the possibility that the coins are fair, along with most of the weight you might initially have placed on a minor bias. Going from 10^1000 to 3^^^^3 coins, you will get to clobber hypotheses like “p = 1 − 10^−2000”, but you will get no evidence whatsoever against “p = 1”. So as long as you assigned any non-infinitesimal, non-gerrymandered credence to p=1 at the start, longer sequences can’t get probabilities approaching zero.
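For what it’s worth, here’s a small sketch of that mixture-prior point; the 1% prior on p = 1, the fair-coin alternative, and the 1000 observed heads are all numbers I made up for illustration.

```python
from fractions import Fraction

def posterior_p_is_one(k_heads_seen, prior_one=Fraction(1, 100)):
    """Posterior weight on "p = 1" after seeing k heads, vs. a fair-coin alternative."""
    prior_fair = 1 - prior_one
    like_one = 1                             # P(k heads | p = 1)
    like_fair = Fraction(1, 2) ** k_heads_seen
    return (prior_one * like_one) / (prior_one * like_one + prior_fair * like_fair)

def prob_next_m_heads(k_heads_seen, m):
    w = posterior_p_is_one(k_heads_seen)
    return w + (1 - w) * Fraction(1, 2) ** m

k = 1000                                     # heads already observed
for m in [10, 1000, 10**6]:                  # length of the hypothesised future run
    print(m, float(prob_next_m_heads(k, m)))
# However large m gets, the answer never drops below the posterior weight on
# p = 1, which after 1000 straight heads is essentially 1.
```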
True, but that is as you see more heads. You can’t actually update your value for p based on evidence you haven’t seen yet; longer sequences would still have probabilities approaching zero.
Can someone let me know why this has negative votes please? Thanks.
Because it’s likes/dislikes, not votes. The number of dislikes is greater than the number of likes by 1. That being said, as the estimated bias of the coin increases, so does the likelihood of future throws being HHHHH. I’m not sure I understand what your point is.
Hover over the thumbs-up / thumbs-down icons; they say “Vote up” and “Vote down”. Anyway, I was wondering what it was that I’d said that was wrong and thus deserved to be voted down.
Yes, I agree. However, what I was trying to point out is that if you start off with no evidence of the coin being biased, then your estimate of the bias won’t increase before you start flipping coins.
By the same token, your estimate of how likely it is that the mugger will kill X people won’t change, because although every person he kills would be evidence toward him killing them all successfully, you’re making the prediction before he does anything. If you read the above comments, I believe it makes sense in context.
We have a saying in Russian, along the lines of: ‘the wall of a shed has [a certain swearword common in graffiti, referring to a reproductive organ] written on it, but that body part is not actually inside the shed’. Edit: anyhow, I kind of don’t see anything wrong with what you said.