Read the first comment on that post and the discussion the OP has with them.
OK, done. Now what? (I did not find that reading that material changed either (a) my opinion that Dias’s complaint was basically that EA is too utilitarian, or (b) my impression that you are complaining it isn’t utilitarian enough.)
No, I’m saying that it ‘chooses more important causes and weights them higher’.
And you regard that as a bad thing? Evidently I’m missing something, because weighting more important things more highly seems obviously sensible. What am I missing?
Is this the flow through effects link?
No, it’s the one linked to the word “prioritarian” in your comment.
The evidence that they believe that is in the link
Have either you or I got something exactly backwards? The post at the far end of that link (the “flow-through effects” one, right?) has the founder of GiveWell saying explicitly that market efficiency is valuable, but you’re citing it as support for your claim that GiveWell doesn’t see market efficiency as valuable.
market efficiency by definition refers to a case where money is being thrown at something that is worthwhile
Any transaction in any market (efficient or not) is such a case (at least with a suitable, somewhat nonstandard, definition of “worthwhile”, but I think you need that for any claim along these lines to be true). It is not clear that the difference between a more and a less efficient market is in how money is being thrown at how-worthwhile things. (Is it?)
well we could use QALY’s and generalise for the entire disease for all people—or, we could infer it from what people actually do in relation to it
Sure. But if what you’re trying to do is get an overall estimate of how much good a particular intervention does (or, harder: how much good it would do) then (1) you are not particularly interested in all those personal idiosyncrasies, except in so far as they come together to make some kind of average, and (2) you almost certainly don’t have enough information about people’s actions to know how much they would value whatever-it-is—because it may simply not be available to them; they may not know about it; they may not know enough about it; and, in the sort of market-based scenario I think you have in mind, perceived benefit is confounded with ability to pay.
(I’ll have more to say about that last point later, but one crude example for now. Imagine someone who is in prison and has either no possessions, or at any rate no access to his possessions. He is tortured for three hours every day. You have a wonderful new device, the Tortur-B-Gon, which magically confers immunity to torture. Words can barely express how much benefit our hypothetical prisoner would get from the Tortur-B-Gon, but you will never find that out by putting it up for sale on the open market and waiting, because the prisoner doesn’t know about the market, can’t get to the shops, and can’t pay for the device.)
Demand can only be reliably inferred from past behaviour. [...etc...]
You are, I think, taking “demand” strictly in the economic sense of willingness to pay. OK, but then note that the supply-versus-demand dichotomy you’re appealing to isn’t exhaustive; there are things that happen that are not either supply or demand. In particular, charitable donation is not “supply-driven” if we take “supply” strictly in the economic sense of willingness to produce at a given price; charitable donation is not the same thing as selling.
Suppose I dedicate my life to understanding patterns of starvation, and I find various patterns that extremely reliably predict when and where a lot of people are likely to starve to death. I also conduct research into how effective various obvious measures (e.g., dropping food parcels by helicopter, walking in and handing out money, or, when there’s enough warning, doing things like supplying fertilizer for crops ahead of time) will be in reducing starvation, and I find various highly predictive patterns there too.
And then I watch the world for these patterns, and when I find a place and time where lots of people are likely to starve to death and one of the readily available countermeasures is likely to be successful, I do it. (Of course this costs a pile of money; let’s suppose I’m rich.)
The result will be that a lot of people will survive who would otherwise have starved to death.
You may, if you please, categorize this as “supply-driven” and say it must therefore be inefficient. Does this insight enable you either to tell me why the scenario I’ve described is impossible, or else to show how to save more lives for the same amount of money by not being “supply-driven”?
(I’m still not sure I understand what you’re saying about warm fuzzies, but I still don’t think it matters because EA is not about warm fuzzies so I’m not going to try very hard.)
That’s just your opinion.
Everything I say is just my opinion. Do you mean something more than that? (And is it in fact your opinion that the worst-off people by most measures don’t tend to be very poor? For instance, suppose we looked at the following populations: 1. People who have involuntarily had nothing to eat for at least five days in the last month. 2. Parents who have had at least three children die. 3. People who die before the age of 40. I’m guessing that those groups are all statistically a lot poorer than the population as a whole.)
I have no idea what tourists’ love of unique and different cultures has to do with this. I agree that the fact that someone is still alive puts a lower bound on how badly off they can be, but it seems to me to be a very low bar.
I think setting up less difficult conditions for maximum utility makes it easier to maximise your utility. There’s no need to slap a label on it.
Sorry, I don’t think I understand how that’s responsive to the question I asked. Is there any chance that you could answer it (or, of course, explain why you choose not to) more explicitly?
the idea of markets optimising for utility weighted by wealth
What markets give us (in theory, subject to various conditions) is a Pareto-efficient allocation of resources. And there’s a theorem that says that (in theory, subject to various conditions) one can get any Pareto-efficient allocation of resources by doing a bunch of pure money-transfer operations and then letting the market do its thing.
That’s nice, and it indicates that the market is optimizing something that increases as individual utility does: some notion of net utility. But what, exactly? Well, it needs to be one that regards those money-transfers as net-utility neutral.
So, suppose I have $1M and you have $1K, and otherwise we’re fairly similar. Because of the diminishing marginal utility of money, a given amount of money is worth more to you than to me. A common approximation is to say that if you have $X then the marginal utility of an extra $1 is roughly proportional to 1/X; equivalently, that utility is roughly logarithmic in wealth. In that case, an extra $1 for you gains you about as much extra happiness as an extra $1K for me.

Consider a transaction in which I find 1000 people like you and pay you each $1 in exchange for what you consider to be $1 worth of inconvenience or pain; I have lost $1K but will be content if I get what I consider to be $1K worth of convenience or pleasure. So we have a possible transaction to which all participants are indifferent: I get a certain amount of happiness; 1000 people each get a roughly equivalent amount of unhappiness; and some money is transferred between us.

If money transfers are net-utility-neutral, then by reversing those transfers we get another, simpler “utility-neutral” transaction: X units of happiness for me, and X units of unhappiness each for 1000 people, so long as they’re 1000x poorer than me.
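A minimal numerical sketch of that approximation, assuming logarithmic utility (the utility function whose marginal value per dollar is proportional to 1/wealth); the wealth figures are the hypothetical $1M and $1K from the example above:

```python
import math

def utility(wealth):
    """Log utility: marginal utility of an extra dollar is ~ 1/wealth."""
    return math.log(wealth)

def utility_gain(wealth, extra):
    """Change in utility from adding `extra` dollars to `wealth` dollars."""
    return utility(wealth + extra) - utility(wealth)

# An extra $1 to someone with $1K, versus an extra $1K to someone with $1M.
gain_poor = utility_gain(1_000, 1)          # log(1001) - log(1000) = log(1.001)
gain_rich = utility_gain(1_000_000, 1_000)  # log(1001000) - log(1000000) = log(1.001)

print(gain_poor, gain_rich)  # identical: both equal log(1.001)
```

Under log utility the two gains are exactly equal, because each transfer is the same *fraction* of the recipient’s wealth; that is what licenses the “an extra $1 for you gains you about as much as an extra $1K for me” step, and hence the 1000-person transaction being treated as utility-neutral.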