As a first approximation of my thinking a week ago, it went something like this: many acts of “charity” consist of trying to manage the lives of the unfortunate for them, and evidence is emerging that the unfortunate know their own needs better than we do. We should empower them and leave them “free to optimize”, so to speak.
Not that malaria relief or the like is a bad cause, but I personally have more “feeling” for poverty reduction, since combating poverty over the middle term (longer than a year, shorter than a generation, let’s say) tends to leave the individual beneficiaries able to solve many of their other problems themselves, and has generational knock-on effects (for example: reduced poverty leads to better nutrition and better building materials, meaning healthier, smarter children over time, meaning people who can do more to solve their remaining issues, and so on).
And then I was also definitely thinking about people trying to “do maximum good” through existential-risk reduction donations (including MIRI, but not just MIRI), and how these donations tend to be… dubiously effective. Sure, we’re not dead yet, but very few organizations can evidentially demonstrate that they’re actively reducing the probability that we all die. That is, if I want to be less-probably dead next year than this year, I don’t know to whom to donate.
EDIT: Regarding the last paragraph, I should note that I did give MIRI $72 this past year, calculated as the price of the several Harry Potter novels for which the author deserved payment. If I become convinced that MIRI/FHI are actually effective in ensuring both that AI doesn’t kill us all off and that they can do better than throwing the human species into a permanent Medieval Stasis (i.e., that they can “save the world”), resulting in the much-lauded futuristic utopia they use in their recruiting pitches, I will donate larger sums quite willingly. I also want to engage with the scientific and philosophical problems involved myself, just to be damn sure. So don’t think I’m being insulting here; I’m just pointing out that “we’re the only ones thinking about AI risk and other x-risks” (which is mostly true: almost all popular consideration of AI risk beyond the level of Terminator movies has been brought about by MIRI/FHI propagandizing) is not very good evidence for “we’re effectively reducing the odds of AI being a problem and increasing the odds of a universe tiled in awesomeness”.
should empower them and leave them “free to optimize”
Yes, but the (currently prevalent) alternative is not central planning, but rather the proliferation of a variety of different “let-us-manage-your-lifestyle” organizations.
very few organizations can evidentially demonstrate that they’re actively reducing the probability that we all die.
Actually, I can’t think of any. But still, what does this all have to do with central planning?
Would you like me to amend “central” planning to “external” planning? As in, organizations that attempt to plan people’s lives in an interfering sort of way? Sorry, I just want to check whether we’re about to get into a massive argument about vocabulary or whether there’s some point on which we’re actually talking about the same thing.
Interesting; I hadn’t previously thought much about the analogy between (macro) economic planning and (micro) goods-and-services-oriented charity, and it probably does deserve some thought.
Still, the analogy isn’t exact. If we’re talking about basic necessities, things like food and clothes, then the argument seems strong: people’s exact needs will differ in ways that aren’t easy to predict, and direct distribution of goods will therefore incur inefficiencies that cash transfers won’t. I’m pretty sure that GiveWell and its various peers know about these pitfalls, as evidenced by GiveDirectly’s consistently high ranking. But I can also think of situations where there are information, infrastructure, or availability problems to overcome—market failures, in other words—that cash won’t do much for in the medium term, and it’s plausible to me that many of the EA community’s traditional beneficiaries do work in this space.
As to existential risk… well, that’s a completely different approach. To borrow a phrase from GiveWell’s blog, existential risk reduction is an extreme charity-as-investment strategy, and there’s very little decent analysis covering it. I don’t entirely trust MIRI’s in-house estimates, but I couldn’t point you to anything better, either.
I guess it’s mostly a terminology thing. I associate “central planning” with things like the USSR, and it was jarring to see an offhand reference to EA as centrally planned.
If we redefine things in terms of external management/control vs. just providing resources without strings attached, I don’t know if we disagree much.
In that case, I think I could spend part of the evening hammering out what precisely our differences are, or I could get off LessWrong and do my actual job.
Well, you just raised my opinion of GiveWell.
Currently choosing the latter.