Here’s the problem with talking x-risk with cynics who believe humanity is a net negative, and also a couple possible solutions.
Frequently, when discussing the great filter or averting nuclear war, someone will bring up the notion that human extinction would be a good thing. Humanity has such a bad track record on environmental responsibility and human rights abuses against less advanced civilizations that the planet, and by extension the universe, would be better off without us. Or so the argument goes. I’ve even seen countersignaling severe enough to argue, somewhat seriously, in favor of building more nukes and weapons, out of a vague but general hatred for our collective insanity, politics, pettiness, etc.
Obviously these aren’t exactly careful, step-by-step arguments, where if I refute some point they’ll reverse their position and decide we should spread humanity to the stars. It’s a very general, diffuse dissatisfaction, and if I were to refute any one part, the response would be “OK, sure, but what about [a thousand other things that are wrong with the world]”. It’s like fighting fog, because no single complaint is their true objection, at least not quite. It’s not as if either of us feels we’re on opposite sides of a debate, though, so usually pointing out a few simple facts is enough to get a concession that there are exceptions to the rule “humanity sucks”. Obviously, though, refuting all thousand things one by one isn’t a sound strategy. There really is a lot of bad stuff that humanity has done, and will continue to do, I’m sure.
Usually, I try to point at broad improving trends in infant mortality, war, extreme poverty, etc. I’ll argue that the media biases our fears by magnifying all the problems that remain. I paint a rosy arc of progress: people fought debtors’ prisons in the past, debate universal healthcare today, and in the future will argue fiercely over whether money and work are needed at all in their post-scarcity Star Trek economy. Political rights for minorities yesterday, social justice today, arguments over minor inconveniences tomorrow. Starvation yesterday, healthy food for all today, gourmet delicacies free next to drinking fountains tomorrow. I figure they’re more likely to accept a future where we never stop arguing, but do so over progressively more petty things, and never realize we’re in a utopia.
However, I think I might have better luck trying to counter-countersignal. “Yeah, humanity is pretty messed up, but why would you want to put us out of our misery? Shouldn’t we be made to suffer through climate change and everything else we’ve brought on ourselves, instead of getting off easy? Imagine another thousand years of inane cubicle work and a dozen more Trump presidencies. Maybe we’ll learn our lesson.” [Obviously, I’m joking here.]
I think this might have the advantage of aligning their cynicism with their more charitable impulses, at least the way my conversations tend to go. And there’s no impulse to counter-counter-counter-signal, because I’ve gone up a meta-level and made the counter-signaling game explicit, which releases all the fun available from being contrarian, and moves the conversation toward new sources of amusement. I’ll bet we could then proceed to have interesting discussions on how to solve the world’s problems. If whoever I’m musing with comes up with a few ideas of their own, maybe they’ll even take ownership of the ideas, and start to actually care about saving the world in their own way. I can dream, I suppose.
You can also point out the contradiction that they don’t seem to be in any hurry to take the obvious first step of killing themselves, which proves they see at least one human life as a net positive. From there, move on to everyone else they don’t want to kill or prevent from being born.
Be aware, though, that this isn’t truth-seeking. It’s debate for the fun of it.
I think there’s also a near/far thing going on. I can’t find it now, but somewhere in the rationalist diaspora someone discussed a study showing that people will donate more to help a smaller number of injured birds. That’s one reason why charity ads focus on one person’s or family’s story rather than faceless statistics.
Combining this with what you pointed out, maybe a fun place to take the discussion would be to suggest that we start with a specific one of our friends. “Exactly. Let’s start with Bob. Alice next, then you. I’ll volunteer to go last. After all, I wouldn’t want you guys to have to suffer through the loss of all your friends, one by one. No need to thank me; it is its own reward.”
EDIT: I was thinking of scope insensitivity, but couldn’t remember the name. It’s not just an LW concept, but also an empirically studied bias with a Wikipedia page and everything.
However, I misremembered it above. It’s true that I could cherry-pick numbers and say that donations went down with scope in one case, but I’m guessing that result isn’t statistically significant. People are probably willing to donate a little more, not less, to have an impact a hundred times as large. Perhaps misleading vividness has effects at small scales, as I implied, but at large scales the slope is likely positive, even if just barely.
There are both good and bad aspects of the human race, and our future could easily contain a lot that is bad. However, that is as much a reason to support improvement as it is a reason to support our own destruction.
So it’s a half full/half empty situation.