The procedure you’re proposing collapses into approval voting immediately.
Nobody has any reason to vote anything other than 10 (possibly minus epsilon) or 0 (possibly plus epsilon). If you like A slightly more than B, and estimate that A will get 81 points while B will get 90, the optimal behaviour is to vote 10 for A (and for everything you like better than A) and 0 for B (and for everything you like worse than B); any other vote is strictly worse for you. There is no scenario under which voting anything other than 0 and 10 beats the extreme votes.
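Here’s a minimal sketch of that claim, under the assumption that everyone else’s totals are known exactly (the totals and utilities below are made up for illustration): enumerating every possible ballot shows the extreme 10/0 vote matches the best achievable payoff.

```python
# Minimal sketch of the claim above, assuming everyone else's totals are
# already known. All numbers here are illustrative, not from real data.
from itertools import product

others = {"A": 81, "B": 90}      # known score totals from everyone else
utility = {"A": 1.0, "B": 0.9}   # you like A slightly more than B

def payoff(vote):
    """Utility of whichever candidate wins once your scores are added."""
    totals = {c: others[c] + vote[c] for c in others}
    winner = max(totals, key=totals.get)
    return utility[winner]

best = max(payoff({"A": a, "B": b}) for a, b in product(range(11), repeat=2))
extreme = payoff({"A": 10, "B": 0})
assert extreme == best           # the 10/0 vote is as good as any ballot
print(extreme, best)             # 1.0 1.0  (A wins, 91 to 90)
```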
Approval voting is prone to tactical voting (this should be added to the requirements; Arrow’s Theorem talks about preferences, as it assumes each vote is uniquely determined by them). You never need to order candidates in a way that reverses your preferences, but your approval threshold depends on what you expect others to vote. If you think A > B > C, and think C is leading, you vote 10, 10, 0. If you think B is leading, you vote 10, 0, 0. It also fails determinism.
If we give people a predefined allowance of points (say, n alternatives must each receive one of the scores 1 to n, or each voter gets n points to distribute arbitrarily), it fails independence immediately, and tactical voting can now produce outright preference reversal, which is even worse: if A > B > C, C is leading, and B still has some chance, you vote 0, 10, 0, scoring A below B. The sketch below works through such a case.
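A small sketch of that reversal, with made-up scenario probabilities and totals: under a fixed 10-point budget, when A is hopeless and the B-versus-C race is tight, the expected-utility-maximizing ballot puts everything on B, scoring it above the genuinely preferred A.

```python
# Sketch of tactical preference reversal under a points budget. The
# utilities, totals, and 50/50 scenarios are all assumptions for
# illustration; the ballot must split exactly 10 points among A, B, C.
utility = [2.0, 1.0, 0.0]                    # true preference: A > B > C
scenarios = [(80, 95, 104), (80, 98, 96)]    # others' totals, each p = 1/2

def expected_utility(vote):
    ev = 0.0
    for others in scenarios:
        totals = [o + v for o, v in zip(others, vote)]
        ev += 0.5 * utility[totals.index(max(totals))]
    return ev

# Enumerate every way to split the 10-point budget among the three.
votes = [(a, b, 10 - a - b) for a in range(11) for b in range(11 - a)]
best = max(votes, key=expected_utility)
print(best, expected_utility(best))  # (0, 10, 0) 1.0 -- B scored above A
```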
“The procedure you’re proposing collapses into approval voting immediately.”
This only holds when you already know the outcome of every other vote. (Of course, in the real world, you don’t normally know the outcome of every other vote.) Suppose, for instance, that you have three possible outcomes, A, B and C, where A is “you win a thousand dollars”, B is “you get nothing”, and C is “you lose a thousand dollars”. Suppose that (as a simple case) you know that there’s a 50% chance of the other scores being (92, 93, 90) and a 50% chance of the other scores being (88, 92, 99).
If you vote 10, 0, 0, then 50% of the time you win a thousand dollars, and 50% of the time you lose a thousand dollars, for a net expected value of $0. If you vote 10, 10, 0, you always get nothing, for a net expected value of $0. If you vote 10, 8, 0, however, you win $1000 50% of the time and get nothing 50% of the time, for a total expected value of $500.
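The arithmetic is easy to verify directly; a quick script, using exactly the payoffs and 50/50 score profiles from the example above:

```python
# Verifying the expected values above. A pays $1000, B pays $0, C costs
# $1000; the two equally likely profiles of other voters' scores are
# taken straight from the example.
scenarios = [(92, 93, 90), (88, 92, 99)]   # others' totals for (A, B, C)
payout = [1000, 0, -1000]                  # dollars if A, B, or C wins

def expected_value(vote):
    ev = 0.0
    for others in scenarios:               # each scenario has probability 1/2
        totals = [o + v for o, v in zip(others, vote)]
        ev += 0.5 * payout[totals.index(max(totals))]
    return ev

for vote in [(10, 0, 0), (10, 10, 0), (10, 8, 0)]:
    print(vote, expected_value(vote))
# (10, 0, 0) 0.0
# (10, 10, 0) 0.0
# (10, 8, 0) 500.0
```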
taw is correct for most realistic situations. If there is a large voting population, and your probability distribution over vote outcomes is pretty smooth, then the marginal expected utility of each +1 vote shouldn’t change that much as a result of your minuscule contribution. In that case, if you vote anything other than 0, you may as well vote 10.
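A rough Monte Carlo sketch of why (the normal distributions and all numbers below are assumptions for illustration): when everyone else’s totals are drawn from a wide, smooth distribution, your expected utility is close to linear in the score you give the middle candidate, so only an endpoint score can be optimal.

```python
# Sketch with assumed numbers: others' totals for A, B, C are drawn from
# broad normal distributions, so outcome probabilities vary smoothly.
# Expected utility is then nearly linear in the score given to B, which
# pushes the optimal score to an endpoint (0 or 10).
import random

utility = [1.0, 0.9, 0.0]   # you prefer A > B > C
TRIALS = 100_000

def expected_utility(b_score):
    rng = random.Random(0)  # common random numbers, so estimates compare cleanly
    total = 0.0
    for _ in range(TRIALS):
        others = [rng.gauss(900, 50) for _ in range(3)]
        totals = [others[0] + 10, others[1] + b_score, others[2]]
        total += utility[totals.index(max(totals))]
    return total / TRIALS

for b in range(0, 11, 2):
    print(b, round(expected_utility(b), 4))
# The estimates move almost linearly in b, so among scores for B only
# the endpoints 0 and 10 can maximize expected utility.
```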
Doesn’t this assume that you’re a causal decision theorist?
No. (That is, to make taw incorrect you have to assume much more than that you are not a CDT agent; for example, you’d need assumptions about what the other agents are, what they want, and what they know about you.)
It seems to me that the same sort of decision-theoretic considerations that motivate me to vote at all in a large election would make it valid for me to vote my actual (relative) preference weighting in that election.
That’s true, in the limit as the number of voters goes to infinity, if you only care about which preference is ranked the highest. However, there are numerous circumstances where this isn’t the case.
Specifically, it isn’t the case if you believe that the gap between the potential winners is smaller than the ballot’s range (maximum score minus minimum), and that other people don’t believe the same and adjust their votes accordingly.
Relevant section from Steve Rayhawk’s nursery effect link:
If an 0-99 range voter has given 0s and 99s to the candidates she considers most likely to win, and now asks herself “how should I score the remaining no-hope candidates?”, the strategic payoff for exaggerating to give them 0s or 99s can easily be extremely small, because the probability of that causing or preventing them winning can easily be below 10^-100. Supposing a voter gets even a single molecule worth of “happiness neurotransmitter” from voting honestly on these candidates, that happiness-payoff is worth more to that voter than the expected payoff from exaggerating about these candidates via “approval style” range-voting. Therefore, range voters will often cast substantially-honest range votes, even those inclined to be “strategic.”