Let’s try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At that point, AI will build a smarter AI, which will build an even smarter AI, and then, FOOM! We have a superintelligence. It’s important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?
Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they’d be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
These three examples are very common appeals to commonsense. But commonsense hasn’t worked very well in the domain of finding optimal causes.
What?? If this is true, please pass along the message to the Gates Foundation, the United Nations, the World Economic Forum, and… almost everyone else on the planet.
Yes, I was going to say… How can one possibly argue that certain speculative causes are too popular and this is because they play into common cognitive biases when the examples are the fringest of the fringe and funded approximately not at all?
I wish I lived on a planet where these were ‘very common appeals to commonsense’. I wonder how much a ticket there would cost?
I think it might be truer for a select group of people. In the LW community, I have gotten the impression that existential risk is higher status than global poverty reduction; that’s definitely the opinion of the high-status people in this community. And maybe for the specific kind of nonconformist nerd who reads Less Wrong and is likely to come across this post, transhumanism and existential risk reduction have a “coolness factor” that global poverty reduction doesn’t have.
You’re definitely right about the wider world, but many people might only care about the opinions of the 100 or so members of their in-group.
This. Status matters within one’s in-group, or within a group one wants to be accepted by as an in-group member.
I feel like you’re just sneering at a very small point I made rather than actually engaging with it.
What I meant to say was (1) x-risk reduction is cooler and higher status in the effective altruist / LessWrong community and (2) this biases people at least a little bit. I’ll edit the essay to reflect that.
Would you agree with (1)? What about (2)?
If you meant to say x-risk reduction is high-status in the EA/LW community, then yes, that makes a lot more sense than what you originally said.
But I’m not actually sure how true this is in the broader EA community. E.g., GiveWell and Peter Singer are two huge players in the EA community, each with larger communities than LW (by my estimate), and they haven’t publicly advocated x-risk reduction. So my guess is that x-risk reduction is basically just high status in the LW/MIRI/FHI world, and maybe around CEA as well due to their closeness to FHI. To the extent that x-risk reduction is high status in that world, we should expect a bias toward x-risk reduction, but that’s a pretty small world. There’s a much larger and wealthier world outside that group which is strongly biased against caring about x-risk reduction, and for this and other reasons we should expect, on net, that Earth pays way, way less attention to x-risk than is warranted.
GiveWell is doing shallow analyses of catastrophic risks, and Peter Singer has written favorably on reducing x-risk, though he hasn’t endorsed particular charities or interventions, and it’s not a regular theme in his presentations.
Thanks, I didn’t know about the Singer article.
Why do you think that there’s a bias against x-risk reduction in the broader world? I think that there’s a pretty strong case for x-risk reduction being underprioritized from a utilitarian perspective. But I don’t think that I’ve seen compelling evidence that it’s unappealing relative to a randomly chosen cause.
By “randomly chosen cause,” do you mean something like “randomly chosen among the charitable causes which have at least $500K devoted to them each year” or “randomly chosen in the space of potential causes”?
The former.
Consider the total amount spent toward the generalized cause of a randomly chosen charity with a budget of at least $500K/year. I.e., not the Local Village Center for the Blind, but humanity’s total efforts to help the blind. Compare that to MIRI and FHI.
Agreed.
Search for ‘million donation’ on news.google.com, first two pages:
Kentucky college gets record $250 million gift
$20-million Walton donation will boost Teach for America in LA
NIH applauds $30 million donation from NFL
Emerson College gets $2 million donation
Jim Pattison makes $5 million donation for Royal Jubilee Hospital
Eric and Wendy Schmidt donate $15 million for Governors Island park
Every time I hear a dollar amount on the news, I cringe at realizing how pathetic spending on existential risks is by comparison.
I agree that x-risk reduction is a lot less popular than, e.g., caring for the blind, but it doesn’t follow that people are strongly biased against caring about x-risk reduction. Note that x-risk reduction is a relatively new cause (because the issues didn’t become clear until relatively recently), whereas people have been caring for the blind for millennia. Under the circumstances, one would expect much more attention to go toward caring for the blind independently of whether people were biased against x-risk reduction specifically. I expect x-risk reduction to become more popular over time.