“The procedure you’re proposing collapses into approval voting immediately.”
This only holds if you already know the outcome of every other vote. (Of course, in the real world, you normally don’t.) Suppose, for instance, that you have three possible outcomes, A, B and C, where A is “you win a thousand dollars”, B is “you get nothing”, and C is “you lose a thousand dollars”. Suppose that (as a simple case) you know that there’s a 50% chance of the other scores being (92, 93, 90) and a 50% chance of the other scores being (88, 92, 99).
If you vote 10, 0, 0, then 50% of the time you win a thousand dollars, and 50% of the time you lose a thousand dollars, for a net expected value of $0. If you vote 10, 10, 0, you always get nothing, for a net expected value of $0. If you vote 10, 8, 0, however, you win $1000 50% of the time and get nothing 50% of the time, for a total expected value of $500.
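The arithmetic above can be checked with a short script (a sketch using the two 50/50 score profiles and the $1000/$0/−$1000 payoffs from the example; ties are ignored because none occur here):

```python
# Payoffs for outcomes A, B, C and the two equally likely
# score profiles from the example above.
payoffs = [1000, 0, -1000]
scenarios = [(92, 93, 90), (88, 92, 99)]

def expected_value(my_vote):
    # Average the payoff of the winning outcome over both scenarios.
    total = 0
    for others in scenarios:
        totals = [o + m for o, m in zip(others, my_vote)]
        winner = totals.index(max(totals))
        total += payoffs[winner]
    return total / len(scenarios)

print(expected_value((10, 0, 0)))   # 0.0
print(expected_value((10, 10, 0)))  # 0.0
print(expected_value((10, 8, 0)))   # 500.0
```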
taw is correct for most realistic situations. If there is a large voting population, and your probability distribution over vote outcomes is reasonably smooth, then the marginal expected utility of each +1 vote shouldn’t change much as a result of your minuscule contribution. In that case, if you vote anything other than 0, you may as well vote 10.
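One way to see this is a rough two-candidate sketch (the Normal(0, 50) margin is an illustrative assumption, not anything from the thread): if the net margin of the other votes is smoothly spread out, your probability of swinging the result is almost exactly linear in your score, so the expected utility is maximized at an endpoint, 0 or 10.

```python
import math

def win_prob(v, sigma=50.0):
    # P(candidate A wins) when the other voters' net margin for A
    # is Normal(0, sigma) and you add score v to A:
    # P(margin + v > 0) = Phi(v / sigma).
    return 0.5 * (1.0 + math.erf(v / (sigma * math.sqrt(2.0))))

# Marginal gain in win probability from each extra point, v -> v+1.
gains = [win_prob(v + 1) - win_prob(v) for v in range(10)]

# The gains are nearly identical: each +1 point is worth about the
# same, so expected utility is (almost) linear in your score and the
# optimum sits at 0 or 10.
print(max(gains) / min(gains))
```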
Doesn’t this assume that you’re a causal decision theorist?
No. (That is, to make taw incorrect you have to assume much more than that you are not a CDT agent. For example, making assumptions about what the other agents are, what they want and what they know about you.)
It seems to me that the same sort of decision-theoretic considerations that motivate me to vote at all in a large election would make it valid for me to vote my actual (relative) preference weighting in that election.
That’s true, in the limit as the number of voters goes to infinity, if you only care about which preference is ranked the highest. However, there are numerous circumstances where this isn’t the case.
Specifically, it fails if you believe that the gap between the potential winners is smaller than the vote range (the maximum score minus the minimum), and that other voters do not share that belief and adjust their votes accordingly.