I’m sorry to say that this all seems rather muddled. I don’t know how much of the muddle is actually in my brain.
You say “Effective Altruism isn’t utilitarian” and then link to an LW post whose central complaint is that EA is too utilitarian. Then you say “EA is prioritarian” by which I guess you mean it says “pick the most important cause and give only to it” and link to an LW post that doesn’t say anything remotely like that (it just says: here is one particular cause, see how much good you can do by giving to it).
You say GiveWell doesn’t see market efficiency as inherently valuable. I am not aware of any evidence for that; what there is evidence for is that they don’t see market efficiency as something worth throwing money at, and I have to say this seems very obviously correct; am I missing something here?
You say GiveWell’s “theory of value relates to health status”, by which I think you mean that they assess benefit as increase in QALYs. That seems pretty reasonable to me and I don’t understand your objections. (I’m sure there are ways one can help people that don’t show up in a QALY measurement, but when evaluating charities that aim to save lives or cure diseases—which is a large fraction of what charities targeting the world’s neediest people are doing—it seems reasonable; and when they look at e.g. GiveDirectly I don’t think they try to translate everything into QALYs.) Would you like to clarify what you’re objecting to and why?
You say “Donation is inherently supply driven, so it will inevitably be inefficient” but the whole point of EA is to try and figure out where the demand is and move donations there. (Except that “demand” needs to be reinterpreted slightly. “Need” would be a better term.)
I don’t understand your paragraph beginning “Inefficient market for warm fuzzies” at all, but I doubt it matters since EA is supposed to be all about what one does to actually help people; warm fuzzies should be “purchased separately”.
You don’t have much to say about how you think this could all be done better. You talk about “market based solutions” but (to me—perhaps others are cleverer) it’s far from clear what these might be. Markets, roughly, optimize for utility weighted by wealth, and unsurprisingly enough the worst-off people by most measures tend to be very poor. Accordingly, no demand-driven market-based solution can possibly do much for them because they haven’t enough money to generate much demand in the economic sense. (Even if they had access to the relevant markets, which as you mention in passing they may well not.) So … what do you have in mind, and why is it credible that it comes closer to maximizing utility than present-day EA?
You say “Effective Altruism isn’t utilitarian” and then link to an LW post whose central complaint is that EA is too utilitarian.
Read the first comment on that post and the discussion the OP has with them.
Then you say “EA is prioritarian” by which I guess you mean it says “pick the most important cause and give only to it”
No, I’m saying that it ‘chooses more important causes and weights them higher’.
and link to an LW post that doesn’t say anything remotely like that (it just says: here is one particular cause, see how much good you can do by giving to it).
Is this the flow-through effects link? I’m not sure what you’re talking about.
You say GiveWell doesn’t see market efficiency as inherently valuable. I am not aware of any evidence for that; what there is evidence for is that they don’t see market efficiency as something worth throwing money at, and I have to say this seems very obviously correct; am I missing something here?
The evidence that they believe that is in the link, where GiveWell says it, and the other links are to 80K or GWWC echoing it (I don’t recall which from memory).
I would say you are missing something: whether market efficiency is something worth throwing money at. Market efficiency by definition refers to a case where money is being thrown at something that is worthwhile—a coincidence of interest between supply and demand.
You say GiveWell’s “theory of value relates to health status”, by which I think you mean that they assess benefit as increase in QALYs. That seems pretty reasonable to me and I don’t understand your objections. (I’m sure there are ways one can help people that don’t show up in a QALY measurement, but when evaluating charities that aim to save lives or cure diseases—which is a large fraction of what charities targeting the world’s neediest people are doing—it seems reasonable; and when they look at e.g. GiveDirectly I don’t think they try to translate everything into QALYs.) Would you like to clarify what you’re objecting to and why?
Certainly. If QALYs are valuable, then curing disease and saving lives is inherently valuable. However, people experience death and disease differently. Very differently. How can we work out how ‘bad’ that is for them? Well, we could use QALYs and generalise across the entire disease for all people—or we could infer it from what people actually do in relation to it. Do they save up money to buy bed nets, or do they spend that money on a donkey to visit their girlfriend in the next village? (That’s a fictional, kinda silly example, but it illustrates my point.) If they have a preference for bed nets above all other alternative options, and still can’t afford them, they have an incentive to contribute their labour, for instance, to their community in a way that improves the lives of others and helps those people reach their preferences, while earning money to buy those bed nets. If they can’t be valuable to their community, then their death is a net positive to the overall economic efficiency of their community. That is, unless they are artificially subsidised for that kind of lifestyle by certain kinds of charity.
You say “Donation is inherently supply driven, so it will inevitably be inefficient” but the whole point of EA is to try and figure out where the demand is and move donations there. (Except that “demand” needs to be reinterpreted slightly. “Need” would be a better term.)
Demand can only be reliably inferred from past behaviour. If someone buys a loaf of bread every week, that’s demand for bread. If there’s a 1/23 chance someone in a village gets cholera in a given year, and that village has a reputation for being able to afford the cholera treatment, then that’s demand for cholera treatment. People ‘demanding’ or begging, or a tourist feeling sorry for someone out of a subjective judgement of some kind of inferior lifestyle, is not demand. It can be interpreted as need, or even modelled as a need by consequence of something else (i.e. you need to eat food to survive), but then the question is something else: are you donating because they ‘demand’ something (fulfilling a subjective desire or utility state for them, which I believe is empathy driven), or are you fulfilling a utility conditional on your guilt or something else?
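To make that concrete, here is a minimal sketch with entirely made-up numbers (the village size, risk, price, and payment rate are all hypothetical): demand in this strict sense is what past purchasing behaviour reveals, not what anyone says they need.

```python
# Minimal sketch, with entirely hypothetical numbers, of inferring
# "demand" in the strict economic sense from past behaviour.

village_population = 230
annual_cholera_risk = 1 / 23      # chance a given villager gets cholera
treatment_price = 12.0            # hypothetical price of treatment, in $
share_who_paid_before = 0.9       # observed: fraction of past cases treated

# Expected cases per year, and the revealed (paid-for) demand among them:
expected_cases = village_population * annual_cholera_risk
revealed_demand = expected_cases * share_who_paid_before

print(f"expected cholera cases per year: {expected_cases:.1f}")
print(f"treatments actually purchased:   {revealed_demand:.1f}")
print(f"revealed annual spend:           ${revealed_demand * treatment_price:.2f}")
```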
I don’t understand your paragraph beginning “Inefficient market for warm fuzzies” at all, but I doubt it matters since EA is supposed to be all about what one does to actually help people; warm fuzzies should be “purchased separately”.
If a non-EA gets 100% warm fuzzies from donating to save polar bears or another thing they intuit, their dynamic inconsistency means their cause preference changes, and it’s no big deal for them to switch charities when they feel like it.
An EA gets warm fuzzies only if they can satisfy some complicated equation and win the approval of their EA buddies, and that approval changes as information gets updated. However, they’re also fighting against their intuitive warm fuzzies for things like polar bears, and the same dynamic inconsistency among non-effective causes that non-EAs face—for instance, feeling like donating to guide dogs when primed by seeing a local blind man. Since this is far more complicated, the prospect of regret would be higher—at least, I think so intuitively, no?
Markets, roughly, optimize for utility weighted by wealth
I had never thought about it like that. I have to think about this some more. What a novel way of looking at it—thanks!
unsurprisingly enough the worst-off people by most measures tend to be very poor.
That’s just your opinion. Many tourists love unique and different cultures for their own sake. Or they might have a unique language to share, or anything. If they are alive, it’s because they have survived in an evolutionarily sound way till now, so as a rough heuristic, they’re okay until there’s some kind of disaster event.
what do you have in mind, and why is it credible that it comes closer to maximizing utility than present-day EA?
I think setting up less difficult conditions for maximum utility makes it easier to maximise your utility. There’s no need to slap a label on it. If I call something ‘effective fruit eating’ where I maximise my utility by successfully eating the sultanas across the room from me right now, it’s not very hard for me to maximise my utility.
Could you explain the idea of markets optimising for utility weighted by wealth more? I’m having trouble wrapping my head around the concept.
edit 1: perhaps existing EAs could maximise their utility more by getting treated for scrupulosity?
Read the first comment on that post and the discussion the OP has with them.
OK, done. Now what? (I did not find that reading that material changed (a) my opinion that Dias’s complaint was basically that EA is too utilitarian, nor (b) my impression that you are complaining it isn’t utilitarian enough.)
No, I’m saying that it ‘chooses more important causes and weights them higher’.
And you regard that as a bad thing? Evidently I’m missing something, because weighting more important things more highly seems obviously sensible. What am I missing?
Is this the flow-through effects link?
No, it’s the one linked to the word “prioritarian” in your comment.
The evidence that they believe that is in the link
Have either you or I got something exactly backwards? The post at the far end of that link (the “flow-through effects” one, right?) has the founder of GiveWell saying explicitly that market efficiency is valuable, but you’re citing it as support for your claim that GiveWell doesn’t see market efficiency as valuable.
market efficiency by definition refers to a case where money is being thrown at something that is worthwhile
Any transaction in any market (efficient or not) is such a case (at least with a suitable, somewhat nonstandard, definition of “worthwhile”, but I think you need that for any claim along these lines to be true). It is not clear that the difference between a more and a less efficient market is in how money is being thrown at how-worthwhile things. (Is it?)
Well, we could use QALYs and generalise across the entire disease for all people—or we could infer it from what people actually do in relation to it
Sure. But if what you’re trying to do is get an overall estimate of how much good a particular intervention does (or, harder: how much good it would do) then (1) you are not particularly interested in all those personal idiosyncrasies, except in so far as they come together to make some kind of average, and (2) you almost certainly don’t have enough information about people’s actions to know how much they would value whatever-it-is—because it may simply not be available to them; they may not know about it; they may not know enough about it; and, in the sort of market-based scenario I think you have in mind, perceived benefit is confounded with ability to pay.
(I’ll have more to say about that last point later, but one crude example for now. Imagine someone who is in prison and has either no possessions, or at any rate no access to his possessions. He is tortured for three hours every day. You have a wonderful new device, the Tortur-B-Gon, which magically confers immunity to torture. Words can barely express how much benefit our hypothetical prisoner would get from the Tortur-B-Gon, but you will never find that out by putting it for sale on the open market and waiting, because the prisoner doesn’t know about the market, can’t get to the shops, and can’t pay for the device.)
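To make the confound in point (2) concrete, here is a minimal sketch with invented numbers: inferring value from purchases alone misses exactly the people (like our prisoner) who would benefit most but cannot pay.

```python
# Minimal sketch (invented numbers) of the confound in point (2):
# observed purchases reflect benefit AND ability to pay, so inferring
# value from behaviour alone hides those who benefit most but can't buy.

people = [
    # true benefit in QALYs if they get the intervention, and cash on hand
    {"benefit_qalys": 3.0, "cash": 1.0},    # huge benefit, can't afford it
    {"benefit_qalys": 3.0, "cash": 100.0},
    {"benefit_qalys": 0.2, "cash": 100.0},
]
price = 10.0

buyers = [p for p in people if p["cash"] >= price]
observed_avg = sum(p["benefit_qalys"] for p in buyers) / len(buyers)
actual_avg = sum(p["benefit_qalys"] for p in people) / len(people)

print(f"average benefit inferred from buyers only: {observed_avg:.2f} QALYs")
print(f"actual average benefit across everyone:    {actual_avg:.2f} QALYs")
```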
Demand can only be reliably inferred from past behaviour. [...etc...]
You are, I think, taking “demand” strictly in the economic sense of willingness to pay. OK, but then note that the supply-versus-demand dichotomy you’re appealing to isn’t exhaustive; there are things that happen that are not either supply or demand. In particular, charitable donation is not “supply-driven” if we take “supply” strictly in the economic sense of willingness to produce at a given price; charitable donation is not the same thing as selling.
Suppose I dedicate my life to understanding patterns of starvation, and I find various patterns that extremely reliably predict when and where a lot of people are likely to starve to death. I also conduct research into how effective various obvious measures (e.g., dropping food parcels by helicopter, walking in and handing out money, or when there’s enough warning doing things like supplying fertilizer for crops ahead of time) will be in reducing starvation, and I find various highly predictive patterns there too.
And then I watch the world for these patterns, and when I find a place and time where lots of people are likely to starve to death and one of the readily available countermeasures is likely to be successful, I do it. (Of course this costs a pile of money; let’s suppose I’m rich.)
The result will be that a lot of people will survive who would otherwise have starved to death.
You may, if you please, categorize this as “supply-driven” and say it must therefore be inefficient. Does this insight enable you either to tell me why the scenario I’ve described is impossible, or else to show how to save more lives for the same amount of money by not being “supply-driven”?
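(For concreteness, the scenario above is nothing more exotic than a predict-then-act rule. A minimal sketch, with every region, probability, and cost invented for illustration:)

```python
# Minimal sketch of the predict-then-act rule described above. All the
# regions, probabilities, and costs here are invented for illustration.

regions = [
    {"name": "A", "p_famine": 0.80, "p_measure_works": 0.90, "cost": 2_000_000},
    {"name": "B", "p_famine": 0.10, "p_measure_works": 0.95, "cost": 1_000_000},
    {"name": "C", "p_famine": 0.70, "p_measure_works": 0.20, "cost": 3_000_000},
]

budget = 5_000_000
for region in regions:
    likely_famine = region["p_famine"] > 0.5
    likely_success = region["p_measure_works"] > 0.5
    if likely_famine and likely_success and region["cost"] <= budget:
        budget -= region["cost"]
        print(f"intervene in region {region['name']}; ${budget:,} left")
```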
(I’m still not sure I understand what you’re saying about warm fuzzies, but I still don’t think it matters because EA is not about warm fuzzies so I’m not going to try very hard.)
That’s just your opinion.
Everything I say is just my opinion. Do you mean something more than that? (And is it in fact your opinion that the worst-off people by most measures don’t tend to be very poor? For instance, suppose we looked at the following populations: 1. People who have involuntarily had nothing to eat for at least five days in the last month. 2. Parents who have had at least three children die. 3. People who die before the age of 40. I’m guessing that those groups are all statistically a lot poorer than the population as a whole.)
I have no idea what tourists’ love of unique and different cultures has to do with this. I agree that someone who is still alive is necessarily still alive and that puts an upper bound on how things are for them, but it seems to me to be a very low upper bound.
I think setting up less difficult conditions for maximum utility makes it easier to maximise your utility. There’s no need to slap a label on it.
Sorry, I don’t think I understand how that’s responsive to the question I asked. Is there any chance that you could answer it (or, of course, explain why you choose not to) more explicitly?
the idea of markets optimising for utility weighted by wealth
What markets give us (in theory, subject to various conditions) is a Pareto-efficient allocation of resources. And there’s a theorem that says that (in theory, subject to various conditions) one can get any Pareto-efficient allocation of resources by doing a bunch of pure money-transfer operations and then letting the market do its thing.
That’s nice, and it indicates that the market is optimizing something that increases as individual utility does: some notion of net utility. But what, exactly? Well, it needs to be one that regards those money-transfers as net-utility neutral.
So, suppose I have $1M and you have $1K, and otherwise we’re fairly similar. Because of the diminishing marginal utility of money, a given amount of money is worth more to you than to me. A common approximation is to say that if you have $X then the marginal utility of an extra $1 is roughly proportional to 1/X; equivalently, that the marginal utility of an extra $1 is roughly proportional to 1/wealth. In that case, an extra $1 for you gains you about as much extra happiness as an extra $1K for me. Consider a transaction in which I find 1000 people like you and pay you each $1 in exchange for what you consider to be $1 worth of inconvenience or pain; I have lost $1K but will be content if I get what I consider to be $1K worth of convenience or pleasure. So we have a possible transaction to which all participants are indifferent: I get a certain amount of happiness; 1000 people each get a roughly equivalent amount of unhappiness; and some money is transferred between us. If money transfers are net-utility-neutral, then by reversing those transfers we get another simpler “utility-neutral” transaction: X units of happiness for me, X units of unhappiness each for 1000 people. So long as they’re 1000x poorer than me.
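Here is a minimal numeric check of that arithmetic, assuming log utility so that the marginal utility of a dollar is roughly proportional to 1/wealth, as in the approximation above:

```python
import math

# Minimal sketch assuming u(wealth) = ln(wealth), so that the marginal
# utility of an extra dollar is roughly 1/wealth, as approximated above.

def utility(wealth):
    return math.log(wealth)

my_wealth = 1_000_000.0   # I have $1M
your_wealth = 1_000.0     # each of 1000 people like you has $1K

# What I gain from an extra $1000:
my_gain = utility(my_wealth + 1_000) - utility(my_wealth)

# What each of the 1000 poorer people gives up by accepting $1 worth of
# inconvenience (the utility of their marginal dollar):
their_loss_each = utility(your_wealth) - utility(your_wealth - 1)

print(f"my gain from +$1000:        {my_gain:.6f} utils")
print(f"each person's loss for $1:  {their_loss_each:.6f} utils")
# Both come out near 0.001 utils: the market is indifferent between one
# unit of happiness for me and one unit of unhappiness for EACH of 1000
# people who are 1000x poorer. That is utility weighted by wealth.
```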