The idea that efficient charity is about “doing good” and that “doing good” equates with “lives saved” is one giant availability bias. And of course there is absolutely no sense in going all in on any one cause in a risky world with agents who exhibit time preference.
I’ll argue the first and second claims separately. For the second, we take the first as a premise: let efficient charity be about maximising good. It is not obvious that “doing good” equates to preventing a preventable, premature death, which is my interpretation of “saving a life”, since everyone has to die at some point, cryonics notwithstanding. A much better metric, which GiveWell in fact does use, is quality-adjusted life years (QALY), independent of the exact choice of quality adjustments. It measures how many additional “good” life years a person is expected to gain from the intervention. And still it is difficult to see how this is exactly equal to “good”. We see people donating to their local high school sports team, the Catholic Church, WWF, UNICEF, Wikipedia, the Linux Foundation and many more. All these people are said to “do good”. Is “doing good” then not more about solving problems involving public goods? Of course, if I can extend a life by 20 QALY for $1000 it is difficult to see how I could get the same amount of good from donating to the local high school sports team, but a couple of hours more Wikipedia uptime, especially in the developing world, could measure up.
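To make the metric concrete, here is a minimal sketch of the arithmetic; the numbers are assumptions chosen to match the $1000-for-20-QALY figure above, not actual GiveWell estimates.

```python
# Illustrative QALY arithmetic; all numbers are assumptions for this example.
extra_years = 25        # additional life years attributed to the intervention
quality_weight = 0.8    # assumed quality of those years (0 = as bad as death, 1 = full health)
cost = 1000             # dollars donated

qaly = extra_years * quality_weight   # 25 * 0.8 = 20 QALY
cost_per_qaly = cost / qaly           # $50 per QALY

print(f"{qaly:.0f} QALY at ${cost_per_qaly:.0f} per QALY")
```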
Why then do people go on and on about the QALY metric? Simple: it is a number that can be calculated relatively easily, instead of having to map all public goods onto some common measure of good. It is more available, thus it is used.
Now to the second claim. We live in a risky world and we exhibit time preference: good now is more valuable than the same amount of good later; we have a discount function. So I have a very good reason to think about whether I want to donate to GiveWell to purchase some QALY now, or donate to MIRI/FHI to purchase a lot of QALY in a hundred years. But what is worse, any intervention, any donation, is inherently risky, since the conversion from donated money to actual good can fail, if only through failures of the organisation receiving the money. Take GiveWell’s Clear Fund to be a certain conversion, and take their second-highest-rated charity to be risky, with a 50% chance of a donated $1000 being turned into double the QALY of the Clear Fund plus one QALY more, and a 50% chance of nothing happening. (Whether this is per donation or for all donations at once is irrelevant here.) Should I go all in on either? Basic betting theory says no. I’ll have to mix. And I’ll have to mix according to my values: how much I care about the world in any given future, and how much risk I am willing to take.
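To see why mixing wins here, consider a minimal sketch under stated assumptions: the Clear Fund reliably yields 20 QALY per $1000 (the figure from above), the risky charity yields 41 QALY with probability 0.5 and nothing otherwise, and “how much risk I am willing to take” is modelled, purely for illustration, as logarithmic utility over total QALY.

```python
import math

# Assumed payoffs per $1000 donated (illustrative numbers, as above).
SAFE_QALY = 20.0                  # Clear Fund: certain conversion
RISKY_QALY = 2 * SAFE_QALY + 1    # 41 QALY with probability 0.5, else nothing
P_SUCCESS = 0.5

def expected_utility(x):
    """Expected log utility of putting fraction x of the $1000 into the risky charity.

    Log utility is one stand-in for risk aversion over good; any strictly
    concave utility gives the same qualitative answer: mix, don't go all in.
    """
    good_if_success = (1 - x) * SAFE_QALY + x * RISKY_QALY
    good_if_failure = (1 - x) * SAFE_QALY
    return P_SUCCESS * math.log(good_if_success) + (1 - P_SUCCESS) * math.log(good_if_failure)

# Grid search over allocations; x = 1.0 is excluded because log(0) is undefined:
# going all in risks ending up with zero good, which a concave utility abhors.
best_x = max((i / 1000 for i in range(1000)), key=expected_utility)
print(f"optimal risky fraction: {best_x:.3f}")   # about 0.024 under these assumptions
```

Note that a risk-neutral donor (linear utility) would go all in on the risky charity, since 20.5 expected QALY beats 20; the mixing conclusion comes entirely from the concavity, that is, from one’s risk preferences.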
Also, I am human. I am donating not because I am altruistic, but because it gives me a good feeling. Moreover, I can only decide for myself how to donate, not for all the world at once, so there’s that.
Edit: I’m happy you linked Yvain’s article, because there is a point in it about investing that makes my argument even more complicated. But I’m even happier, because the point about investing the money is something I had thought about before and now have to research further.
I don’t actually value lives saved very much. Death’s just not that big a deal. I’m more interested in producing states of wonder and joy. I want to bring as many people as I can to the level of education and self-awareness where they can appreciate the incredibleness of the world. The saddest thing I can think of is that there are hundreds of thousands of people who are just as smart as I am who were never given the opportunities or encouragement that I was to come to love the world. I’m in the business of poverty alleviation and disease eradication because removing those constraints allows us to maximize the number of fully flourishing human lives. Similarly, I care about a singularity because the amount of insight to which the species has access will go through the roof.
I use “lives saved” since, as you point out, it’s a sort of hot word that people react to and associate with “doing good.”
Should I go all in on either? Basic betting theory says no.
This is my issue. I’m not sure what justification we have for ignoring the theory, assuming we actually want to be maximally helpful. Can you elaborate?
There is absolutely no justification for ignoring betting theory. It was formulated for turning money into more money, but it applies equally well to turning any cardinal quantity into another. Some time ago there was an absurdly long article on here about why one should not diversify one’s donations, but it assumed there is no risk, which makes its point moot.
And even if there were no risk, my utility from each cause diminishes at the margin. I’ll donate some to one cause until that desire is satisfied, then donate to another cause until that desire is satisfied, and so on. This has the dual benefit of supporting multiple causes I care about and of hedging against potentially bad metrics like QALY.
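That “donate until the desire is satisfied, then move on” procedure amounts to giving each marginal dollar to whichever cause currently offers the highest marginal utility. A minimal sketch, with made-up diminishing-returns curves for three causes:

```python
# Made-up diminishing-returns curves: the marginal utility of the next dollar
# to a cause falls as the amount already given to that cause grows.
def marginal_utility(weight, already_given):
    return weight / (1.0 + already_given)   # derivative of weight * log(1 + given)

causes = {"GiveWell": 3.0, "Wikipedia": 2.0, "local sports team": 1.0}
given = {name: 0.0 for name in causes}

budget, step = 1000, 10   # donate $1000 in $10 increments
for _ in range(budget // step):
    # Each increment goes to the cause with the highest current marginal utility.
    best = max(causes, key=lambda c: marginal_utility(causes[c], given[c]))
    given[best] += step

print(given)   # every cause gets something, roughly in proportion to its weight
```

With constant marginal utilities the same loop would put the whole budget on the single best cause, which is exactly the crux of the disagreement below.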
I don’t understand.
There is absolutely no justification for ignoring betting theory.
and
And even if there were no risk, my utility from each cause diminishes at the margin. I’ll donate some to one cause until that desire is satisfied, then donate to another cause until that desire is satisfied, and so on. This has the dual benefit of supporting multiple causes I care about and of hedging against potentially bad metrics like QALY.
Aren’t these mutually exclusive statements or am I misunderstanding? What is your position?
Aren’t these mutually exclusive statements or am I misunderstanding?
Misunderstanding. Diversify, that is my position. Assuming risk, we have to diversify. But even when we assume no risk, we get diminishing marginal utility from any one cause, so we should diversify there too, just as you don’t put all of your money beyond subsistence into any one good.
But the reason I don’t put all my money into one good (that said, I’m pretty close; after food and rent, it’s just books, travel, and charity) is that my utility function has built-in diminishing marginal returns. I don’t get as much enjoyment out of doing something that I’ve already been doing a lot. If I am sincerely concerned about the well-being of others and about effective charity, then there is no significant change in the marginal impact per dollar I spend. While it is a fair critique that I may not actually care, I want to care, meaning I have a second-order term in my utility function that is not satisfied unless I am being effective with my altruism.
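To put the two positions side by side (in notation introduced here for illustration, not taken from the thread): split a budget B as donations x_i ≥ 0 across causes so as to maximize total utility.

```latex
% Split budget B as x_i >= 0 across causes i, maximizing total utility:
\[
  \max_{x_i \ge 0,\; \sum_i x_i = B} \; \sum_i u_i(x_i)
\]
% Constant marginal impact (your donation is tiny relative to the problem),
% u_i(x_i) = c_i x_i, yields a corner solution: everything to the best cause.
\[
  x_{i^*} = B \text{ for } i^* = \arg\max_i c_i, \qquad x_j = 0 \text{ otherwise}
\]
% Strictly concave u_i (diminishing returns or risk aversion) yields an interior
% split in which marginal utilities are equalized across all funded causes:
\[
  u_i'(x_i) = \lambda \text{ for every funded cause } i
\]
```

Whether marginal impact is really constant is precisely what the exchange below turns on.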
If I am sincerely concerned about the well-being of others and about effective charity, then there is no significant change in the marginal impact per dollar I spend.
Oh, you are sincerely concerned? Then of course any contribution you make to an efficient cause like world poverty will be virtually zero relative to the size of the problem, so spend away. But personally I can see people going “ten lives saved is good enough, let’s spend the rest on booze”. Further arguments could be made that it is unfair that only people in Africa get donations but not people in India, or similar.
But that only knocks down the marginal argument. The risk argument still stands and is way stronger anyway.
While it is a fair critique that I may not actually care, I want to care, meaning I have a second-order term in my utility function that is not satisfied unless I am being effective with my altruism.
Signaling, signaling, signaling all the way down.
Ok. Fine, maybe it’s signaling. I’m ok with that, since the part of me that does really care thinks “if my desire to signal leads me to help effectively, then it’s fine in my book”. But then I’m fascinated, because that part of me may actually be motivated by my desire to signal my kindness. It may be signaling “all the way down”, but it seems to be alternating levels of signaling motivated by altruism motivated by signaling. Maybe it eventually stabilizes at one or the other.
I don’t care. Whether I’m doing it out of altruism or doing it for signaling (or, as I personally think, neither, but rather something more complex, involving my choice of personal identity, which I suspect uses the neural architecture that was developed for playing status games but has been generalized to compare against an abstract ideal instead of against other agents), I do want to be maximally effective.
If I know what my goals are, what motivates them is not of great consequence.