Just by observation, it seems that 100% probability simply tends to be weighted somewhat more heavily, say by an extra 20%. I’d expect that for most people, there’s some payoff ratio at which they’d take the 99% over the 100%.
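As a purely hypothetical illustration (the 20% premium and the dollar figures below are my own assumptions, not data), here is what such a certainty premium does to the break-even point:

```python
# Hypothetical model: an agent who inflates the weight of certainty by 20%
# (so P=1.0 is treated as if it were 1.2), versus linear weighting.
# All numbers are made up for the example.

CERTAINTY_PREMIUM = 1.2  # assumed extra weight given to a sure thing

def decision_weight(p):
    """Subjective weight of probability p under this toy model."""
    return p * CERTAINTY_PREMIUM if p == 1.0 else p

def prefers_gamble(sure_payoff, gamble_payoff, gamble_prob):
    """Does the biased agent take the gamble over the sure thing?"""
    return decision_weight(gamble_prob) * gamble_payoff > decision_weight(1.0) * sure_payoff

# With the premium, 99% of $27K (0.99 * 27000 = 26730) loses to a sure $24K
# (1.2 * 24000 = 28800), despite the gamble's higher raw expected value.
# Raise the gamble's payoff far enough and the preference flips:
print(prefers_gamble(24_000, 27_000, 0.99))  # False: certainty wins
print(prefers_gamble(24_000, 30_000, 0.99))  # True: 0.99 * 30000 = 29700 > 28800
```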
Sacrificing a guaranteed thing for an uncertain thing also has a different psychological weight: if you lose, you know you’re responsible for that loss, whereas with the 66% vs 67% you can excuse it as “Well, I probably would have lost anyway”. This one is easily resolved by modifying the problem so that you learn what the result was, and thus if it came up 67 you know it’s your own fault.
100% certainty also has magical mathematical properties in Bayesian reasoning: a probability of exactly 1 can never be updated to anything less than 1, no matter what evidence comes in, whereas a 99% could later get moved by other evidence. And on the flip side of the coin, it requires infinite evidence to establish 100% in the first place, so it shouldn’t really exist to begin with.
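A minimal sketch of why a prior of exactly 1 is immovable under Bayes’ rule (the likelihoods here are arbitrary, picked only to make the contrast vivid):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|~H) * (1 - P(H))).
# If P(H) = 1, the second term in the denominator is zero no matter how
# damning the evidence, so the posterior is stuck at 1 forever.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Evidence that is 99x more likely if the hypothesis is FALSE:
print(bayes_update(1.00, 0.01, 0.99))  # 1.0 -- certainty can't budge
print(bayes_update(0.99, 0.01, 0.99))  # 0.5 -- the 99% prior updates hard
```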
The problem with set four is that money really, seriously, does not scale at those levels, and my neurology can’t really comprehend what “a million times the utility of $24K” would mean. If I ask myself “what is the smallest thing I would sacrifice $24K for a one-in-a-million chance at”, then I’ll either get an answer, assign it that utility value, and take the bet, or find out that my neurology is incapable of evaluating utility on that scale. Either way it breaks the question. (For me, I’d sacrifice $24K for a one-in-a-million chance at a Friendly Singularity that leads to a proper Fun Eutopia.)
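For concreteness, the break-even arithmetic, treating dollars as a linear stand-in for utility, which is precisely the assumption that fails at this scale:

```python
# If utility were linear in dollars, a one-in-a-million gamble breaks even
# against a sure $24K only when the prize is worth 24000 / (1/1000000),
# i.e. $24 billion. No human neurology actually evaluates utility linearly
# across six orders of magnitude like this, which is the point above.

sure_thing = 24_000
chance = 1 / 1_000_000

break_even_prize = sure_thing / chance
print(f"${break_even_prize:,.0f}")  # $24,000,000,000
```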