Can you define it in terms of “sensory”, “motor”, and “processing”? That is, in order to be an optimizer, you must have some awareness of the state of some system; at least two options for behavior that affect that system in some way; and a connection from awareness to action that tends to increase some objective function.
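To make that three-part test concrete, here's a minimal sketch (mine, not from the original post) applied to a thermostat: it senses the temperature, has at least two motor options, and its processing picks whichever option is predicted to improve the objective.

```python
def thermostat_policy(sensed_temp, target=20.0):
    """Processing: choose the motor option predicted to bring the
    objective (closeness to target temperature) closest to optimal."""
    options = {"heat": +2.0, "off": -1.0}  # predicted temperature change per step
    return max(options, key=lambda a: -abs(sensed_temp + options[a] - target))

# Simulate the loop: sensory (read temp) -> processing (policy) -> motor (act).
temp = 15.0
for _ in range(20):
    action = thermostat_policy(temp)
    temp += {"heat": +2.0, "off": -1.0}[action]
```

By this test the bottle cap fails immediately: no sensor, one motor option, so there's nothing for the processing step to choose between.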
Works for bottle cap: no sensory, only one motor option.
Works for liver: senses blood, does not sense bank account. Former is a proxy for latter but a very poor one.
For bubbles? This definition would call bubbles optimizers of finding lower pressure areas of liquid, iff you say that they have the “option” of moving in some other direction. I’m OK with having a fuzzy definition in this case; in some circumstances, you might *want* to consider bubbles as optimizers, while in others, it might work better to take them as mechanical rule-followers.
Stag hunt has two equilibria, and only the good one is strong. Prisoner’s dilemma has only one equilibrium, and it’s bad. But here we’re talking about asymmetric Snowdrift/Chicken, where both the bad and the good equilibria are strong but, if there’s uncertainty about which is which, the best outcome is non-equilibrium mutual cooperation.
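To make those equilibrium claims checkable, here's a quick sketch (my payoff numbers are illustrative, not canonical) that finds the pure-strategy Nash equilibria of a 2x2 game and flags the ones that are "strong" in the crude sense of having no joint deviation both players would strictly prefer:

```python
def pure_nash(game):
    """game[r][c] = (row payoff, col payoff); return pure Nash equilibria."""
    eq = []
    for r in range(2):
        for c in range(2):
            row_ok = game[r][c][0] >= game[1 - r][c][0]   # row can't gain by switching
            col_ok = game[r][c][1] >= game[r][1 - c][1]   # col can't gain by switching
            if row_ok and col_ok:
                eq.append((r, c))
    return eq

def strong(game, cell):
    """Crude 'strong' test: no other cell is strictly better for BOTH players."""
    r, c = cell
    return not any(
        game[i][j][0] > game[r][c][0] and game[i][j][1] > game[r][c][1]
        for i in range(2) for j in range(2)
    )

stag_hunt = [[(4, 4), (0, 3)], [(3, 0), (3, 3)]]   # 0 = stag, 1 = hare
prisoners = [[(3, 3), (0, 5)], [(5, 0), (1, 1)]]   # 0 = cooperate, 1 = defect
chicken   = [[(3, 3), (2, 4)], [(4, 2), (0, 0)]]   # 0 = swerve, 1 = straight
```

Running this: stag hunt has two equilibria, of which only (stag, stag) is strong; the prisoner's dilemma has one equilibrium, which isn't strong; chicken's two asymmetric equilibria are both strong, while mutual cooperation at (3, 3) is the best joint outcome but no equilibrium at all.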
Condorcet is good. The one fundamental sense in which 3-2-1 is better is its resistance to the dark-horse pathology, especially in the context of combined delegated and tactical voting. In Condorcet, in a highly polarized situation, somebody whom 90% of voters have never heard of might be the Condorcet winner, because each side ranks them above the other side’s candidate. In 3-2-1, that person never makes it into the top 3.
This is not a strong argument, but it’s the one I have.
As regards IRV, it’s definitely worse than either.
“Summable” voting methods require only anonymous tallies (totals by candidate or candidate pair) to find a winner. These do not suffer from the problem you suggest.
But for non-summable methods, such as IRV/RCV/STV, you are absolutely correct. These methods must sacrifice either verifiability/auditability or anonymity. This is just one of the reasons such reforms are not ideal (though still better than choose-one voting, aka plurality).
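As a toy, constructed illustration of why: the two ballot profiles below have identical per-candidate first-choice totals, yet IRV elects different winners. So no per-candidate tally sheet can suffice to determine (or audit) an IRV result; you need the full ballot rankings.

```python
from collections import Counter

def irv_winner(ballots):
    """IRV: repeatedly eliminate the candidate with fewest top preferences
    among remaining candidates, until someone has a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter()
        for b in ballots:
            for c in b:
                if c in remaining:   # count each ballot for its top remaining choice
                    tally[c] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        remaining.discard(min(remaining, key=lambda c: tally.get(c, 0)))

# Same first-choice totals (A:8, B:6, C:5) in both elections...
e1 = [("A", "B", "C")] * 8 + [("B", "C", "A")] * 6 + [("C", "A", "B")] * 5
e2 = [("A", "B", "C")] * 8 + [("B", "A", "C")] * 6 + [("C", "B", "A")] * 5
```

Here C is eliminated first in both elections, but C's transfers go to A in one and to B in the other, producing different winners from identical candidate totals. A summable method (score totals, or a pairwise matrix) would let each precinct publish a small tally that simply adds up.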
I think this is an unfixably bad idea, in two ways: it’s a nonstarter politically, and it would be bad if it did get implemented.
I largely agree with the section on what’s wrong with the current situation.
But this goes off the rails when it asserts, in passing, that score voting is immune to the Gibbard-Satterthwaite theorem. Read the Satterthwaite proof of this theorem, and you’ll see how general it is. Cardinal voting escapes Arrow’s theorem, but does NOT escape G-S.
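Here's a small constructed example (mine, not from the post) of the strategic vulnerability this implies: under plain score voting, one faction changes the winner by "burying" the strongest rival, i.e. min-maxing its ballots instead of reporting honest scores.

```python
def score_winner(ballots):
    """Plain score voting: the candidate with the highest total score wins."""
    totals = {}
    for b in ballots:
        for cand, score in b.items():
            totals[cand] = totals.get(cand, 0) + score
    return max(totals, key=totals.get)

honest = ([{"A": 10, "B": 6, "C": 0}] * 45     # A faction, honest scores
          + [{"A": 0, "B": 10, "C": 4}] * 30
          + [{"A": 0, "B": 3, "C": 10}] * 25)

# The A faction buries B (the honest winner) by zeroing everyone but A:
strategic = ([{"A": 10, "B": 0, "C": 0}] * 45
             + [{"A": 0, "B": 10, "C": 4}] * 30
             + [{"A": 0, "B": 3, "C": 10}] * 25)
```

With honest ballots B wins (645 to A's 450); after the burial, A wins. One faction profits by misreporting its preferences, which is exactly the kind of manipulability the theorem guarantees.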
In particular, any proportional method is subject to free-riding strategy. And since this system is designed to be proportional across time as well as seats, free-riding strategy would be absolutely pervasive, and I suspect it would take the form of deliberately voting for the craziest possible option. If I’m right, then, like Borda, this system could actually be worse than random-ballot single-winner; impressively bad.
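To illustrate free riding concretely, here's a sketch using Reweighted Range Voting (a stand-in proportional score method, not the post's exact proposal): a faction whose ballots honestly support the consensus winner A gets its ballot weight "used up" on A, while the same faction wins a better second seat by withholding those points.

```python
def rrv(ballots, seats, max_score=10):
    """Reweighted Range Voting: elect `seats` winners one at a time;
    each ballot's weight is 1 / (1 + spent/max_score), where `spent`
    is the total score it gave to already-elected winners."""
    winners = []
    spent = [0.0] * len(ballots)
    for _ in range(seats):
        totals = {}
        for i, b in enumerate(ballots):
            w = 1.0 / (1.0 + spent[i] / max_score)
            for cand, score in b.items():
                if cand not in winners:
                    totals[cand] = totals.get(cand, 0.0) + w * score
        winner = max(totals, key=totals.get)
        winners.append(winner)
        for i, b in enumerate(ballots):
            spent[i] += b.get(winner, 0)
    return winners

honest = ([{"A": 10, "B": 9, "C": 0}] * 55     # X faction
          + [{"A": 10, "B": 0, "C": 10}] * 45)  # Y faction, honest about A

freeride = ([{"A": 10, "B": 9, "C": 0}] * 55
            + [{"A": 0, "B": 0, "C": 10}] * 45)  # Y withholds its support for A
```

Honestly, A is elected first and both factions are reweighted to half strength, so X's candidate B takes the second seat. When Y free-rides, A still wins the first seat on X's ballots alone, Y keeps full weight, and Y's candidate C takes the second seat instead. Y gains by hiding support for a candidate it genuinely likes.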
I think it’s great that you’re thinking about structural reform and voting reform, and you’re on the right track in many regards. I just hope you can let go of this particular idea. I’m sorry to be so negative, but I think it’s warranted here.
The probability density at an almost-exact tie is not that much lower in most high-stakes elections; I’d put the chance of a decisive vote at 1 in 5 million or so. That’s just a bit more than one order of magnitude of difference and doesn’t substantially change the overall conclusions.
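The "1 in 5 million" figure can be sanity-checked with the standard first-order approximation: the probability that one vote is decisive is roughly the forecast density of the winner's vote share at exactly 50%, divided by the number of voters. The forecast parameters below are illustrative, not from any specific election.

```python
import math

def p_decisive(n_voters, mean, sd):
    """P(one vote flips the outcome) ~ f(0.5) / n, where f is a
    normal forecast density over the winner's two-party vote share."""
    density = math.exp(-0.5 * ((0.5 - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return density / n_voters

# A close national election: ~100M voters, forecast 51% +/- 2 points.
p = p_decisive(100_000_000, mean=0.51, sd=0.02)
```

With these inputs p comes out around 2e-7, i.e. within a factor of two of 1 in 5 million; the number moves by an order of magnitude or so as you vary the forecast mean and spread, not by many orders.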
The case would rely on curvature in the sigmoid that describes probability of winning the election as a function of participation. And you’re right, that makes it decidedly a second- or third-order effect; to first order, correlation is irrelevant.
I have no idea if there is such a bound. I will never have any idea if there is such a bound, and I suspect that neither will any entity in this universe. Given that fact, I’d rather make the assumption that doesn’t turn me stupid when Pascal’s Wager comes up.
On (1): if you can’t tell who the better candidate is, voting is working. You shouldn’t use that example to reason about what would happen if you didn’t vote. It’s not a one-off game.
On (2): this is true, but it’s also a fully general argument. Doing anything contributes to mind-kill, as you become attached to the idea that it was the right thing to do.
I’m tempted to erase the following argument because it’s a bit of a cheap shot “gotcha”, but it does also serve the legit purpose of an example, so here goes: For instance, not voting contributes to assuming that anybody who thinks Clinton is an EA cause is mind-killed. (Note: I think that high-profile political campaigns are awash in cash and don’t use it effectively, so I would never recommend high-profile political donations as EA. And you may be right that there’s no argument of sufficient rigor to show that Clinton was better than Trump in x-risk terms. But I strongly suspect that you feel more immediate contempt for somebody who says “donating to Clinton is EA” than for somebody who says “donating to the EFF is EA”, in a way that is slightly mind-killing.)
I am suggesting establishing a policy of voting (“being a voter”) as an x-risk strategy. Once you have that policy, voting is just an everyday action, only indirectly related to x-risk. This distinction makes sense to me but now that you mention it I’m sure there are those for whom it’s nonsense.
When you’re faced with numbers like 3^^^3, scope insensitivity is the correct response. A googolplex is already enough to hold every possible configuration of Life as we know it. “Hamlet, but with extra commas in these three places, performed by intelligent starfish” is in there somewhere in over a googol different varieties. What, then, does 3^^^3 add except more copies of the same?
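Some quick arithmetic (mine) on the scales involved, using up-arrow notation, where 3^^^3 = 3^^(3^^3), i.e. a power tower of 3s that is 3^^3 levels tall:

```python
import math

tower_height = 3 ** 3 ** 3   # 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987
# So 3^^^3 is a power tower of 3s that is `tower_height` levels tall.

# A googolplex = 10^(10^100) is a power tower only three levels deep.
googolplex_digits = 10 ** 100                                   # ~digit count of a googolplex
digits_of_3up4 = math.floor(tower_height * math.log10(3)) + 1   # digit count of 3^^4 = 3^(3^^3)
```

3^^4 has only about 3.6 trillion digits, so a googolplex is bigger than 3^^4 but utterly dwarfed by 3^^5. On the tower scale, a googolplex sits below height five, while 3^^^3 is a tower 7,625,597,484,987 levels tall. That's the gap scope insensitivity is (I'd argue correctly) refusing to feel.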
Lobbying, or campaigning?
I think that there are various distinctions between lobbying, campaigning, and voting. Similar logic may or may not apply across these domains.
I don’t think that normal humans can live on the bleeding edge of maximum effectiveness every waking moment. I don’t presume to give advice to those who aren’t normal humans.
With quantum branching, our universe could have some number like a googolplex of stuff, maybe more. And philosophically, you’re worried about the difference between that and 3^^^3? I get that there’s a big gap there but I’d guess it’s one that we’re definitionally unable to do useful moral reasoning about.
I’m saying that law thinking can seem to forget that the map (model) will never be the territory. The real world has real invariants but these are not simply reproduced in reasonable utility functions.
This doesn’t pass my ITT for anti-law-thinking. The step where law thinking goes wrong is when it assumes that there exists a map that is the territory, and thus systematically underestimates the discrepancies involved in (for instance) optimizing for minimum Euclidean distance.
I realize that this post addresses that directly, but then it spends a lot of energy on something else which isn’t the real problem in my book.
That was surprisingly good. I’ve never let my inner Pat Modesto be the boss, but I’ve never tried to kick them out either. This makes me consider whether I should. Which is a lot more than I get out of most of Eliezer’s writing.
And here’s what kicking Pat out would let me say: I think that I’ve designed at least 5 voting methods that are each the best solution currently in the world to the problem it solves, and at least 3 of those problems are at least 25% likely to be adequately posed (including pragmatic considerations) and important (fixing one would be worth roughly $1e12 as a one-off, with an SD of 1 in the exponent). I think that if you find this sufficiently plausible you should contact me.
It’s OK to say “I think you’re criticizing me wrong”, and it’s OK to say “the community norms are that you’re criticizing them wrong”, but I’m uncomfortable when this piece says “the community norms are [that is, should be] that you’re criticizing me wrong”. If you’re going to assume the mantle of neutral community arbiter of norms, even tentatively, you have to not only be impartial; you have to appear impartial.
Other than that, well done; I agree with most of it.
I agree with everything in this post, but won’t upvote it, because I think upvotes should signal “I want more like this” not “I agree with this”. I don’t want less like this, but I think this is enough.
(On the same principle, you probably shouldn’t upvote this comment unless its score is negative.)