should empower them and leave them “free to optimize”
Yes, but the (currently prevalent) alternative is not central planning, but rather the proliferation of a variety of different “let-us-manage-your-lifestyle” organizations.
very few organizations can evidentially demonstrate that they’re actively reducing the probability that we all die.
Actually, I can’t think of any. But still, what does this all have to do with central planning?
Would you like me to amend “central” planning to “external” planning? As in, organizations that attempt to plan people’s lives in an interfering sort of way? Sorry, I just want to check whether we’re about to get into a massive argument about vocabulary or whether there’s some point on which we’re actually talking about the same thing.
Interesting; I hadn’t previously thought much about the analogy between (macro) economic planning and (micro) goods-and-services-oriented charity, and it probably does deserve some thought.
Still, the analogy isn’t exact. If we’re talking about basic necessities, things like food and clothes, then the argument seems strong: people’s exact needs will differ in ways that aren’t easy to predict, and direct distribution of goods will therefore incur inefficiencies that cash transfers won’t. I’m pretty sure that GiveWell and its various peers know about these pitfalls, as evidenced by GiveDirectly’s consistently high ranking. But I can also think of situations where there are information, infrastructure, or availability problems to overcome—market failures, in other words—that cash won’t do much for in the medium term, and it’s plausible to me that many of the EA community’s traditional beneficiaries do work in this space.
As to existential risk… well, that’s a completely different approach. To borrow a phrase from GiveWell’s blog, existential risk reduction is an extreme charity-as-investment strategy, and there’s very little decent analysis covering it. I don’t entirely trust MIRI’s in-house estimates, but I couldn’t point you to anything better, either.
I guess it’s mostly a terminology thing. I associate “central planning” with things like the USSR, and it was jarring to see an offhand reference to EA being centrally planned.
If we redefine things in terms of external management/control vs. just providing resources without strings attached, I don’t know if we disagree much.
In that case, I think I could spend part of the evening hammering out what precisely our differences are, or I could get off LessWrong and do my actual job.
Well, you just raised my opinion of GiveWell.
Currently choosing the latter.