It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.
My impression is that politics is more prominent and more intense than it used to be, and that this is harming people’s reasonableness, but that there’s been no decline outside of that. I feel like I see fewer outright uninformed or stupid arguments than I used to; probably this has to do with faster access to information and to feedback on reasoning. EA and AI risk memes have been doing relatively well in the 2010s. Maybe that’s just because they needed some time to germinate, but it’s still worth noting.
It didn’t look to me like my disagreement with your comment was caused by hasty summarization, given how specific your comment was on this point, so I figured this wasn’t among the aspects you were hoping people wouldn’t comment on. Apparently I was wrong about that. Note that my comment included an explanation of why I thought it was worth making despite your request and the implicit anti-nitpicking motivation behind it (a motivation I agree with).
If a moral hypothesis gives the wrong answers on some questions that we don’t face, that suggests it also gives the wrong answers on some questions that we do face.
Moral circle widening groups together two processes that I think mostly shouldn’t be grouped together:
1. Changing one’s values so the same kind of phenomenon becomes equally important regardless of whom it happens in (e.g. suffering in a human who lives far away)
2. Changing one’s values so more different phenomena become important (e.g. suffering in a squid brain)
Maybe if you do it right, #2 reduces to #1, but I don’t think that should be assumed.
“CEV”, i.e. “coherent extrapolated volition”, refers (as I understand it) to the notion of aggregating the extrapolated volition across many (all?) individuals (humans, usually), and to the idea that this aggregated EV will “cohere rather than interfere”. (Aside: please don’t anyone quibble with this hasty definition; I’ve read Eliezer’s paper on CEV and much else about it besides, I know it’s complicated. I’m just pointing at the concept.)
I’ll quibble with this definition anyway because I think many people get it wrong. The way I read CEV, it doesn’t claim that extrapolated preferences cohere, but specifically picks out the parts that cohere, and it does so in a way that’s interleaved with the extrapolation step instead of happening after the extrapolation step is over.
If it were up to me, I’d use “CEV” to refer to the proposal Eliezer calls “CEV” in his original article (which I think could be cashed out either in a way where applying the concept to subselves makes sense or in a way where that does not make sense), use “extrapolated volition” to refer to the more general class of algorithms that extrapolate people’s volitions, and use something like “true preferences” or “ideal preferences” or “preferences on reflection” when the algorithm for finding those preferences isn’t important, like in the OP.
If I’m not mistaken, “CEV” originally stood for “Collective Extrapolated Volition”, but then Eliezer changed the name when people interpreted it in more of a “tyranny of the majority” way than he intended.
In thought experiments about utilitarianism, it’s generally a good idea to consider composite beings. A bus is a utility monster in traffic. If it has 30 people in it, its interests count 30 times as much. So maybe there could be things we’d think of as one mind whose internals mapped onto the internals of a bus in a moral-value-preserving way. (I guess the repugnant conclusion is about utility monsters but for quantity instead of quality.)
One line of attack against the idea that we should reject the repugnant conclusion is to ask why the lives are barely worth living. If it’s because the many people have the same good lives but they’re p-zombies 99.9999% of the time, I can easily believe that increasing the population until there’s more total conscious experience makes the tradeoff worthwhile.
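For concreteness, a quick back-of-the-envelope check of that scenario (the numbers are just the ones from the thought experiment; the variable names are mine):

```python
# Each of the many lives is conscious only 0.0001% of the time.
conscious_fraction = 1 - 0.999999

# Population multiplier needed before the large population contains more
# total conscious experience than the small one (the lives are otherwise
# identical by assumption).
breakeven_multiplier = 1 / conscious_fraction
print(f"~{breakeven_multiplier:,.0f}x the population")  # ~1,000,000x
```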
I think in the philosophy literature it’s generally interpreted as independent of resource constraints. A quick scan of the linked SEP article seems to confirm this. Apart from the question of what Parfit said, it makes a lot of sense to consider the questions of “what is good” and “what is feasible” separately. And people find the claim that sufficiently many barely-good lives are better than fewer happy lives plenty repugnant even if it has no direct implications for population policy. (In my opinion this is largely because a life barely worth living is better than they imagine.)
The repugnant conclusion just says “a sufficiently large number of lives barely worth living is preferable to a smaller number of good lives”. It says nothing about resources; e.g., it doesn’t say that the sufficiently large number can be attained by redistributing a fixed supply.
By “following” I just meant “paying attention to”, which is automatically not low cost. I think it’s plausible that you could make decent decisions without paying any attention, but in practice people who think about rationalist arguments for/against voting do pay attention, and would pay less attention (perhaps 10-100 hours’ worth per election?) if they didn’t vote.
Thanks, I did mean per hour and I’ll edit it. I think my impression of people’s lightcones per hour is higher than yours.

As a stupid model, suppose lightcone quality has a term of 1% * ln(x) or 10% * ln(x), where x is the size/power of the x-risk movement. (Various hypotheses under which the x-risk movement has surprisingly low long-term impact, e.g. humanity is surrounded by aliens or there’s some sort of moral convergence, also imply elections have no long-term impact, so maybe we should be estimating something like the quality of humanity’s attempted inputs into optimizing the lightcone.) Then you only need to increase x by 0.01% or 0.001% to win a microlightcone per lifetime. I think there are hundreds or thousands of people who can achieve this level of impact. (Or rather, I think hundreds or thousands of lifetimes’ worth of work with this level of impact will be done, and the number of people who could add some of these hours if they chose to is greater than that.)

Of course, at this point it matters to estimate the parameters more accurately than to the nearest order of magnitude or two. (For example, Trump vs. Clinton was probably more closely contested than my numbers above, even in terms of expectations before the fact.) Also, of course, putting this much analysis into deciding whether to vote is more costly than voting, so the point is mostly to help us understand similar but different questions.
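To spell out the arithmetic, here’s a minimal sketch of that toy model; the function name and the pairing of coefficients with growth fractions are mine, and the inputs are just the guesses above:

```python
import math

def marginal_gain(coeff, growth):
    """Gain in lightcone quality under the toy model: quality has a term
    coeff * ln(x), so a small fractional increase `growth` in x adds
    about coeff * growth, since d/dx [coeff * ln(x)] = coeff / x."""
    return coeff * math.log(1 + growth)

# coeff = 1% paired with 0.01% growth, coeff = 10% with 0.001% growth:
for coeff, growth in [(0.01, 1e-4), (0.10, 1e-5)]:
    microlightcones = marginal_gain(coeff, growth) * 1e6
    print(f"coeff {coeff:.0%}, growth {growth:.3%} -> "
          f"{microlightcones:.2f} microlightcones")
```

Either pairing works out to about one microlightcone, which is where the 0.01%-or-0.001% figures come from.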
The real cost of voting is mostly the cost of following politics. Maybe you could vote without following politics and still make decent voting decisions, but that’s not a decision people often make in practice.
With millions of voters, the chance that you are correlated with thousands of them is much higher.
It seems to me there are also millions of potential acausal trade partners in non-voting contexts, e.g. in the context of whether to spend most of your effort egoistically or altruistically and toward which cause, whether to obey the law, etc. The only special feature of voting that I can see is that it gives you a share in years’ worth of policy at the cost of only a much smaller amount of your time, making it potentially unusually efficient for altruists.
Naive and extremely rough calculation that doesn’t take logical correlations into account: If you’re in the US and your uncertainty about vote counts is in the tens of millions and the expected vote difference between candidates is also in the tens of millions, then the expected number of elections swayed by the marginal vote might be 1 in 100 million (because almost-equal numbers of votes have lower probability density). If 0.1% of the quality of our future lightcone is at stake, voting wins an expected 10 picolightcones. If voting takes an hour, then it’s worth it iff you’re otherwise winning less than 10 picolightcones per hour. If a lifetime is 100,000 hours, that means less than a microlightcone per lifetime. The popular vote doesn’t determine the outcome, of course, so the relevant number is much smaller in a non-swing state and larger in a swing state or if you’re trading votes with someone in a swing state.
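The same numbers as a quick script, so the unit conversions are easy to check (all inputs are the rough guesses from the comment, not empirical estimates):

```python
p_sway = 1e-8           # ~1 in 100 million chance the marginal vote decides
stakes = 1e-3           # 0.1% of future lightcone quality at stake
lifetime_hours = 1e5    # ~100,000 hours in a lifetime

ev_per_vote = p_sway * stakes  # expected lightcones won per vote
print(f"{ev_per_vote / 1e-12:.0f} picolightcones per vote")      # 10

# An hour spent voting beats the alternative iff your other work wins
# less than ev_per_vote per hour; over a whole lifetime that rate is:
threshold = ev_per_vote * lifetime_hours
print(f"{threshold / 1e-6:.0f} microlightcone per lifetime")     # 1
```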
If your decision is determined by an x-risk perspective, it seems to me you only correlate with others whose decision is determined by an x-risk perspective, and logical correlations become irrelevant because their votes decrease net x-risk if and only if yours does (in expectation, after conditioning on the right information). This doesn’t seem to be the common wisdom, so maybe I’m missing something. At least, a case for taking logical correlations into account here would have to be subtler than the straightforward case for acausal cooperation between egoists.
LW is a public website existing in a conflict-theorist world. My impression is that discussions of this subject and various others are doomed to be “fake”, in the sense that important considerations will be left out, and to provide material for critics to misrepresent as typical of rationalists. If I recall correctly, a somewhat similar thread on LW 1.0 (I can’t immediately find it, but it involved someone being on fire as a metaphor) turned into a major blow-up that people left the site over. I don’t see any upside to outweigh these downsides. Maybe there’s honor in being able to handle this, but if we can’t handle this, it won’t help to act as if we can.
I agree that it doesn’t affect many users and didn’t mean to claim it should be a priority.
I didn’t mean to argue that this deserves mod attention, just that it shouldn’t have been posted or commented on.