I’m an obsessive about voting theory, and have been for over 20 years now. As time passes and my knowledge deepens, I find that while I still feel “this is really important and people don’t pay enough attention to it”, I feel less and less that “this is MORE important than whatever people are talking about here and now, and it should be my job to make them change the subject”. Obviously I think this is a healthy change for me and my social graces, but it also means that you are more likely to hear about voting theory from a younger, shallower version of me than you are from me.
I don’t know how to solve that problem. It’s one thing to be immune enough to evangelists so that you can keep a balance of caring across multiple issues, as discussed in the post above; it’s another harder thing to be immune enough yet still curious enough to find your way past the proselytizers to the calmer, more-mature non-evangelist obsessives.
In my anecdotal experience, the kids are OK. At least as OK as we were when I was a kid in the 80s reading SF from the 60s and 70s.
If you want me to take this hypothesis more seriously than that, show more evidence.
On Gibbard-Satterthwaite, you are wrong. Please read the original papers; Wikipedia is not definitive here. There is a sense in which the sentence you quote from Wikipedia is not quite wrong, but that sense is so limited that the conclusion you draw from it is not supported.
In terms of the “craziest possible option” strategy: people may deliberately vote for something they believe will not win in order to “build up” voting power for later. When they decide to actually spend this built-up power, they would not vote for something crazy. Insofar as this strategy artificially increases their overall voting power over that of other voters, it undermines the fairness of the system. And in the worst case, it could backfire by actually electing a crazy option. In case of backfire, this would obviously not be a rational strategy ex post, but I believe the collective risk of such failed rationality is unacceptably high.
As for the “rich irony” of me calling something a nonstarter politically: just this week, approval voting passed in Fargo; and STAR voting came within a few percent of passing in Lane County, OR. Last summer, thousands of people voted on the Hugo Awards which had been nominated through E Pluribus Hugo. In British Columbia, voters are currently deciding between four election methods, three of which are proportional and two to three of which have never been used. I personally played a meaningful role in each of these efforts, and a pivotal role in some cases. All of these are clearly far beyond “nonstarter politically”. So yes, I’m not afraid to tilt at windmills sometimes, but sometimes the windmills actually are giants, and sometimes the giants lose. I believe I’ve earned some right to express an opinion about when that might be, and when it might not.
Can you define it in terms of “sensory”, “motor”, and “processing”? That is, in order to be an optimizer, you must have some awareness of the state of some system; at least two options for behavior that affect that system in some way; and a connection from awareness to action that tends to increase some objective function.
Works for bottle cap: no sensory, only one motor option.
Works for liver: senses blood, does not sense bank account. Former is a proxy for latter but a very poor one.
For bubbles? This definition would call bubbles optimizers of finding lower pressure areas of liquid, iff you say that they have the “option” of moving in some other direction. I’m OK with having a fuzzy definition in this case; in some circumstances, you might *want* to consider bubbles as optimizers, while in others, it might work better to take them as mechanical rule-followers.
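The sensory/motor/processing test above can be sketched in code. This is purely illustrative (the class and field names are my own), and it only checks the structural parts of the definition; whether the policy actually *tends to increase* some objective function is the fuzzy part that code like this can’t settle:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class Candidate:
    sense: Optional[Callable[[], float]]       # awareness of some system state
    actions: Sequence[str]                     # available behavior options
    policy: Optional[Callable[[float], str]]   # connection from sensing to acting

def is_optimizer(c: Candidate) -> bool:
    # Needs sensing, at least two options, and a sense-to-action link.
    return c.sense is not None and len(c.actions) >= 2 and c.policy is not None

bottle_cap = Candidate(sense=None, actions=["seal"], policy=None)
liver = Candidate(sense=lambda: 90.0,                      # blood glucose, mg/dL
                  actions=["store", "release"],
                  policy=lambda g: "store" if g > 100 else "release")

print(is_optimizer(bottle_cap))  # False: no sensing, only one motor option
print(is_optimizer(liver))       # True: senses blood, two options, a policy
```

The bubble case is exactly where this structural check gives out: whether a bubble “has the option” of moving another direction is a modeling choice, not a fact the code can read off.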
Stag hunt has two equilibria, and only the good one is strong. Prisoner’s dilemma has only one equilibrium, and it’s bad. But here we’re talking about asymmetrical Snowdrift/Chicken, where both the bad and the good equilibria are strong, but, if there’s uncertainty about which is which, the best outcome is non-equilibrium mutual cooperation.
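The equilibrium structure of the three games can be checked mechanically. A quick sketch (the specific payoff numbers are conventional choices of mine, not canonical) that enumerates pure-strategy Nash equilibria:

```python
from itertools import product

def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a two-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff).
    """
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    eq = []
    for r, c in product(rows, cols):
        u_r, u_c = payoffs[(r, c)]
        # Equilibrium: no unilateral deviation improves either player's payoff.
        if all(payoffs[(r2, c)][0] <= u_r for r2 in rows) and \
           all(payoffs[(r, c2)][1] <= u_c for c2 in cols):
            eq.append((r, c))
    return sorted(eq)

stag_hunt = {("stag", "stag"): (4, 4), ("stag", "hare"): (0, 3),
             ("hare", "stag"): (3, 0), ("hare", "hare"): (2, 2)}
prisoners = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
             ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
chicken   = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
             ("D", "C"): (4, 1), ("D", "D"): (0, 0)}

print(pure_nash(stag_hunt))  # two equilibria: all-stag and all-hare
print(pure_nash(prisoners))  # one equilibrium: mutual defection
print(pure_nash(chicken))    # two asymmetric equilibria; (C, C) is not one
```

Note the Chicken output: mutual cooperation is the best joint outcome, yet it’s exactly the cell that fails the equilibrium check.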
Condorcet is good. The one fundamental sense in which 3-2-1 is better is its stronger resistance to the dark-horse pathology, especially in the context of combined delegated and tactical voting. In Condorcet, in a highly polarized situation, somebody whom 90% of voters have never heard of might be the Condorcet winner, because each side ranks them above the other side’s candidate. In 3-2-1, that person never makes it into the top 3.
This is not a strong argument, but it’s the one I have.
As regards IRV, it’s definitely worse than either.
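A toy version of the dark-horse scenario, with a hypothetical three-candidate electorate of my own construction: A and B are the polarized rivals, and D is a dark horse that almost nobody puts first but each side ranks above the other side’s champion.

```python
# Hypothetical polarized electorate: 48% A-first, 48% B-first, 4% D-first,
# with each major bloc ranking the dark horse D above the rival.
ballots = ([("A", "D", "B")] * 48 +
           [("B", "D", "A")] * 48 +
           [("D", "A", "B")] * 4)

def condorcet_winner(ballots):
    """Return the candidate who beats every rival pairwise, or None."""
    candidates = set(ballots[0])
    def beats(x, y):
        pref_x = sum(b.index(x) < b.index(y) for b in ballots)
        return pref_x > len(ballots) - pref_x
    for c in candidates:
        if all(beats(c, other) for other in candidates - {c}):
            return c
    return None

print(condorcet_winner(ballots))  # "D": Condorcet winner on 4% first-choice support
```

In 3-2-1, the first step picks the three candidates with the most “good” ratings, so a candidate with D’s profile, whom almost nobody actively rates highly, is filtered out before the pairwise stage can crown them.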
“Summable” voting methods require only anonymous tallies (totals by candidate or candidate pair) to find a winner. These do not suffer from the problem you suggest.
But for non-summable methods, such as IRV/RCV/STV, you are absolutely correct. These methods must sacrifice either verifiability/auditability or anonymity. This is just one of the reasons such reforms are not ideal (though still better than choose-one voting, aka plurality).
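To make the summability distinction concrete, here is a sketch with made-up precinct totals for score voting (candidate names and numbers are hypothetical):

```python
from collections import Counter

# Score voting is summable: each precinct publishes anonymous per-candidate
# totals, and those totals simply add. No individual ballot leaves the precinct.
precinct_1 = Counter({"A": 412, "B": 389, "C": 118})
precinct_2 = Counter({"A": 250, "B": 301, "C": 97})

combined = precinct_1 + precinct_2
winner = max(combined, key=combined.get)
print(winner, combined[winner])  # B 690

# IRV is not summable: the round-by-round eliminations depend on the full
# multiset of rankings, so precincts must ship individual ballots (or a
# combinatorially large tally of every possible ranking) to a central count,
# which is where the verifiability-vs-anonymity tension comes from.
```

The point is that auditing a summable method only requires checking published precinct totals, while auditing IRV requires access to ballot-level data.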
I think this is an unfixably bad idea, in two ways: it’s a nonstarter politically, and it would be bad if it did get implemented.
I largely agree with the section on what’s wrong with the current situation.
But this goes off the rails when it asserts, in passing, that score voting is immune to the Gibbard-Satterthwaite theorem. Read the Satterthwaite proof of this theorem, and you’ll see how general it is. Cardinal voting escapes Arrow’s theorem, but does NOT escape G-S.
In particular, any proportional method is subject to free riding strategy. And since this system is designed to be proportional across time as well as seats, free riding strategy would be absolutely pervasive, and I suspect it would take the form of deliberately voting for the craziest possible option. If I’m right then, like Borda, this system could actually be worse than random-ballot-single-winner; impressively bad.
I think it’s great that you’re thinking about structural reform and voting reform, and you’re on the right track in many regards. I just hope you can let go of this particular idea. I’m sorry to be so negative, but I think it’s warranted here.
The probability density for almost-equal numbers of votes is not that much lower in most high-stakes elections. I’d say 1 in 5 million or so. That’s just a bit more than one order of magnitude and doesn’t substantially change the overall conclusions.
The case would rely on curvature in the sigmoid that describes probability of winning the election as a function of participation. And you’re right, that makes it decidedly a second- or third-order effect; to first order, correlation is irrelevant.
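A back-of-envelope version of the density estimate, using a model of my own (not the parent commenter’s numbers): treat the final vote margin as roughly normal with some mean and standard deviation in votes, and approximate the chance that one ballot is pivotal by the density of that distribution at a zero margin.

```python
import math

def pivotal_probability(mu: float, sigma: float) -> float:
    # Normal density at margin zero, with mean mu and std dev sigma (in votes).
    return math.exp(-0.5 * (mu / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative inputs: a high-stakes race expected to be close,
# e.g. expected margin 10,000 votes with uncertainty 100,000 votes.
p = pivotal_probability(10_000, 100_000)
print(f"about 1 in {1 / p:,.0f}")
```

The curvature point falls out of the same picture: correlation between voters shifts where on the sigmoid you sit, which only matters through second-order terms; to first order the density at the tie point is all that counts.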
I have no idea if there is such a bound. I will never have any idea if there is such a bound, and I suspect that neither will any entity in this universe. Given that fact, I’d rather make the assumption that doesn’t turn me stupid when Pascal’s Wager comes up.
On (1): if you can’t tell who the better candidate is, voting is working. You shouldn’t use that example to reason about what would happen if you didn’t vote. It’s not a one-off game.
On (2): this is true, but it’s also a fully general argument. Doing anything contributes to mind-kill, as you become attached to the idea that it was the right thing to do.
I’m tempted to erase the following argument because it’s a bit of a cheap shot “gotcha”, but it does also serve the legit purpose of an example, so here goes: For instance, not voting contributes to assuming that anybody who thinks Clinton is an EA cause is mind-killed. (Note: I think that high-profile political campaigns are awash in cash and don’t use it effectively, so I would never recommend high-profile political donations as EA. And you may be right that there’s no argument of sufficient rigor to show that Clinton was better than Trump in x-risk terms. But I strongly suspect that you feel more immediate contempt for somebody who says “donating to Clinton is EA” than for somebody who says “donating to the EFF is EA”, in a way that is slightly mind-killing.)
I am suggesting establishing a policy of voting (“being a voter”) as an x-risk strategy. Once you have that policy, voting is just an everyday action, only indirectly related to x-risk. This distinction makes sense to me, but now that you mention it, I’m sure there are those for whom it’s nonsense.
When you’re faced with numbers like 3^^^3, scope insensitivity is the correct response. A googolplex is already enough to hold every possible configuration of Life as we know it. “Hamlet, but with extra commas in these three places, performed by intelligent starfish” is in there somewhere in over a googol different varieties. What, then, does 3^^^3 add except more copies of the same?
Lobbying, or campaigning?
I think that there are various distinctions between lobbying, campaigning, and voting. Similar logic may or may not apply across these domains.
I don’t think that normal humans can live on the bleeding edge of maximum effectiveness every waking moment. I don’t presume to give advice to those who aren’t normal humans.
With quantum branching, our universe could have some number like a googolplex of stuff, maybe more. And philosophically, you’re worried about the difference between that and 3^^^3? I get that there’s a big gap there but I’d guess it’s one that we’re definitionally unable to do useful moral reasoning about.
I’m saying that law thinking can seem to forget that the map (model) will never be the territory. The real world has real invariants but these are not simply reproduced in reasonable utility functions.
This doesn’t pass my ITT for anti-law-thinking. The step where law thinking goes wrong is when it assumes that there exists a map that is the territory, and thus systematically underestimates the discrepancies involved in (for instance) optimizing for minimum Euclidean distance.
I realize that this post addresses that directly, but then it spends a lot of energy on something else which isn’t the real problem in my book.