Re St Petersburg, I will reiterate that there is no paradox in any finite setting. The game has a value. Whether you’d want to take a bet at close to the value of the game in a large but finite setting is a different question entirely.
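To make "the game has a value" concrete, here is a sketch in Python computing the expected value of a truncated game, assuming one common truncation where an all-heads run simply pays out the final pot (the cap of 10 flips is an arbitrary illustrative choice):

```python
def finite_st_petersburg_value(max_flips):
    # Payout is 2**k dollars if the first tails appears on flip k;
    # if every flip comes up heads, the game stops and pays 2**max_flips.
    value = sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))
    value += (0.5 ** max_flips) * (2 ** max_flips)  # the all-heads run
    return value

print(finite_st_petersburg_value(10))  # 11.0
```

Each term of the sum contributes exactly 1 dollar of expected value, so the value grows linearly in the cap and is always finite.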
Well, there are two separate points to the St Petersburg paradox. One is the existence of relatively simple distributions that have no mean: the expectation doesn’t converge to any finite value. Another example of such a distribution, which actually occurs in physics, is the Cauchy distribution.
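A quick illustration of the Cauchy case, sampling via the inverse CDF: the running sample means keep wandering no matter how many draws you take, while the sample median (which the distribution does have, at 0) settles down:

```python
import math
import random
import statistics

def cauchy_sample(rng):
    # Standard Cauchy via inverse-CDF sampling: tan(pi * (U - 1/2))
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(0)
samples = [cauchy_sample(rng) for _ in range(100_000)]

# Running means: these jump around instead of converging,
# because the distribution has no mean.
running_means = []
total = 0.0
for i, x in enumerate(samples, 1):
    total += x
    if i % 20_000 == 0:
        running_means.append(total / i)

# The sample median, by contrast, is a well-behaved estimator of the center.
sample_median = statistics.median(samples)
```

A single extreme draw (which the heavy tails make routine) can throw the running mean arbitrarily far off, and no sample size protects you from the next one.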
Another, which the original Pascal’s Mugger post was intended to address, is Solomonoff induction, the idealized prediction algorithm used in AIXI. EY demonstrated that if you use it to predict an unbounded quantity like utility, the expectation doesn’t converge or have a mean.
The second point is just that paying more than a few bucks to play the game is silly, even in a relatively small finite version of it. The probability of losing is very high, even though the game has positive expected utility. And this holds even if you adjust the payout tables to account for utility != dollars.
You can bite the bullet and say that if the utility is really so high, you really should take that bet. And that’s fine. But I’m not really comfortable betting away everything on such tiny probabilities. You are basically guaranteed to lose and end up worse off than if you hadn’t bet.
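To put a number on "basically guaranteed to lose": here is a sketch computing the exact probability of losing money in a truncated game (the $15 fee and 20-flip cap are arbitrary illustrative numbers; the truncation pays the final pot on an all-heads run):

```python
def loss_probability(fee, max_flips):
    # Probability that the payout (2**k on the first tails at flip k)
    # comes in below the entry fee, in a game capped at max_flips flips.
    p_lose = 0.0
    for k in range(1, max_flips + 1):
        if 2 ** k < fee:
            p_lose += 0.5 ** k
    if 2 ** max_flips < fee:  # the all-heads run
        p_lose += 0.5 ** max_flips
    return p_lose

print(loss_probability(15, 20))  # 0.875
```

With a $15 fee, the payouts of $2, $4, and $8 all lose, so 7/8 of plays lose money, even though the game's expected value ($21 for a 20-flip cap) exceeds the fee.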
Most people would not even pay a dollar for the mugger not to shoot their mother if they couldn’t see the gun.
You can trade off between median maximizing and expected utility by taking the mean of quantiles. This basically gives you the best average outcome while ignoring incredibly unlikely outcomes. Even median maximizing by itself, which seems terrible, will give you the best possible outcome more than 50% of the time. The median is fairly robust.
Whereas expected utility maximization could give you a shitty outcome 99% of the time, or 99.999% of the time, etc., as long as the outliers are large enough.
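A toy lottery makes the contrast concrete (the payoffs are made up for illustration):

```python
import statistics

# Hypothetical lottery: 99 chances in 100 of losing 100 utility,
# 1 chance in 100 of gaining 1,000,000.
outcomes = [-100] * 99 + [1_000_000]

mean_value = statistics.mean(outcomes)      # 9901: EU maximization says take it
median_value = statistics.median(outcomes)  # -100: median maximization says refuse

# The "mean of quantiles" idea: average the outcomes after ignoring
# the incredibly unlikely top tail (here, the top 1%).
trimmed = statistics.mean(sorted(outcomes)[:99])  # -100
```

The expected-utility maximizer takes this bet every time and ends up worse off 99% of the time; the median (and the tail-trimmed mean) both flag that the typical outcome is a loss.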
Certainly there’s evidence that could convince me, even rather quickly, it’s just that I don’t expect to ever see such evidence.
If you are assigning 1/3^^^3 probability to something, then no amount of evidence will ever convince you.
I’m not saying that unbounded computing power is likely. I’m saying you shouldn’t assign infinitely small probability to it. The universe we live in runs on seemingly infinite computing power. We can’t even simulate the very smallest particles because of how quickly the required number of computations grows.
Maybe someday someone will figure out how to use that computing power. Or even figure out that we could interact with the parent universe that runs us, etc. You shouldn’t use a model that assigns these things 0 probability.