Let's say people can't actually tell the difference (in a one-shot trial) between experiencing 100 utility points of goodness, and only 99 utility points.
I’m assuming you don’t mean there is literally no difference between the cases (which would make the answer obvious), but rather that people would be slightly happier in the Q100 case vs. the counterfactual Q99 case. They won’t reliably be able to tell the difference, but there would be a small chance of any individual noticing the improvement. Still, if you multiply that epsilon by 1M trials, you get a noticeable effect.
I'm not sure whether a barely-noticeable difference in ice cream flavor is on the order of 1% of the total utility of a serving of ice cream, but I'm pretty confident that even if it were an order of magnitude smaller, you'd still be better off creating the 1M x 100 world than the (1M + 1) x (100 - ε) world.
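The arithmetic here is easy to check directly. A minimal sketch, assuming utility points simply sum across people (the numbers are the illustrative ones from the comment, not anything from the post itself):

```python
# Compare aggregate utility in the two worlds from the comment above.
# Assumption: "utility points" add linearly across people.

N = 1_000_000   # people served in the first world
Q = 100.0       # utility points per serving in the first world

def total_utility(n, q):
    """Aggregate utility: n people each receiving q utility points."""
    return n * q

world_a = total_utility(N, Q)  # 1M people at 100 points each

# Try a quality drop of 1% of a serving (eps = 1), and one an order
# of magnitude smaller (eps = 0.1). Both leave world A ahead.
for eps in (1.0, 0.1):
    world_b = total_utility(N + 1, Q - eps)  # one extra person, slightly worse serving
    print(f"eps={eps}: world A leads by {world_a - world_b}")
```

The break-even quality drop is eps = Q / (N + 1), roughly 0.0001 points here, so any perceptible difference swamps the benefit of serving one extra person.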
Perhaps I am muddying the waters too much. I agree with your logic, and with your conclusions. I agree you are better off taking the option where you serve one less person to increase the total payoff.
What I was trying to say in the original post is that for most things there is the thing itself, and the measurement of the thing. For example, a noisy thermometer's reading is off from the actual temperature by some random error. Things feel slightly more suspicious for utility, because the measure of the thing kind of is the thing itself, so the split between the actual value and the measured value feels less defensible.
Side note: This seems like a completely different topic from your top level comment. Kind of weird to start a mostly tangential argument inside an unresolved argument thread.
You’d still be better off creating the 1M x 100 world than the (1M + 1) x (100 - ε) world.
Where does (1M + 1) come from?
In the post Ben mentions the manufacturer doing hundreds of experiments, not millions. Of course, in the limiting case the smallest quality drop can and will be observed, but I believe Ben is not talking about that.
Even if we use the 1M base figure, it doesn't explain why it is +1 rather than e.g. +1000.
You are assuming that the ice cream manufacturer is trying to maximise aggregate utility, which seems obviously false to me.
Both of my comments were about the thought experiment at the end of the post:
You are given a moral dilemma, either a million people will get an experience worth 100 utility points each, or a million + 1 people will get 99 utility points each. The first option gets you more utility total, but if we take the second option we get one more person served and nobody else can even tell the difference.
I apologize. Should have searched before talking.