If a dust speck in the eye is worse than nothing, and being tortured for 50 years is worse than a dust speck, there must be some probability of being tortured for 50 years which is so small that you are indifferent between that and a certainty of getting a dust speck in the eye? I quite agree!
See you at Penguicon...
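A minimal sketch of the arithmetic behind that indifference point, with purely illustrative disutility numbers (they are not from the post or this thread): if a certain speck and 50 years of torture can be placed on one utility scale, expected-utility reasoning makes you indifferent at p = U(speck)/U(torture), and prefer the gamble for any smaller p.

```python
# Sketch of the continuity/indifference point; the utility numbers are
# illustrative assumptions, not canonical values from the discussion.
U_SPECK = -1.0        # disutility of one dust speck in the eye
U_TORTURE = -1.0e12   # disutility of 50 years of torture, on the same (assumed) scale

def expected_utility(p_torture: float) -> float:
    """Expected utility of a gamble: torture with probability p, else nothing."""
    return p_torture * U_TORTURE

# Indifference: p * U_TORTURE == U_SPECK  =>  p = U_SPECK / U_TORTURE
p_indifferent = U_SPECK / U_TORTURE
print(p_indifferent)                                   # 1e-12 under these assumptions
print(expected_utility(p_indifferent / 10) > U_SPECK)  # True: a small enough p beats a certain speck
```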
Your link deals neither with equality of preferences nor with probability. Could you please explain its relevance?
Also, why does this example imply that continuity is generally valid?
It’s about continuity and quantitative commensurability of preferences. Aggregating lots of small events is not quite the same balancing method as multiplying a large event by a tiny probability, and I think some people did bite the second bullet but not the first (?!) - but it’s the same basic concept of continuity and quantitative commensurability that lets you compare utility intervals on a common scale and “shut up and multiply”.
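To spell out why the two bullets lean on the same commensurability assumption, here is a rough sketch; the headcount, probability, and disutilities are all assumptions made up for illustration, not figures from the thread:

```python
# Sketch: aggregating many small harms and multiplying one large harm by a
# tiny probability are both just arithmetic on a single utility scale.
# All numbers are illustrative assumptions.
U_SPECK = -1.0
U_TORTURE = -1.0e12
N_PEOPLE = 1.0e15   # stand-in for an enormous number of speck victims
P_TINY = 1.0e-13    # tiny probability of the torture actually happening

aggregated = N_PEOPLE * U_SPECK   # first bullet: many small events summed
multiplied = P_TINY * U_TORTURE   # second bullet: one large event scaled by probability

# Neither comparison goes through unless the utility intervals are commensurable:
print(aggregated < U_TORTURE)   # True: in aggregate, the specks are worse than one torture
print(multiplied > U_SPECK)     # True: the tiny-probability torture beats a certain speck
```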
The commonality between aggregation and multiplying by probability that you're pointing to here is reasonable, but it lies on the same level as the argument that Psy-Kosh makes in adorable maybes.
The point of this post is that his argument doesn't get you continuity on its own. There is some missing step.
Elsewhere in these comments I'm claiming that what is missing is actually an additional premise.
The effort it takes to weigh the options is itself probably less pleasant than just taking the dust speck. Of course, this decision cost doesn't apply to idealized models, but it's worth considering if you imagine actually facing the decision yourself.