Unless you have crazy-long ASI timelines, you should choose life-saving interventions (e.g. AMF, New Incentives) over welfare-increasing interventions (e.g. GiveDirectly, Helen Keller International). This is because you expect that ASI will radically increase both longevity and welfare.
To illustrate, suppose we’re choosing how to donate $5000 and have two options:
(AMF) Save the life of a 5-year-old in Zambia who would otherwise die from malaria.
(GD) Improve the lives of five families in Kenya by sending each family one year’s salary ($1000).
Suppose that, before considering ASI, you are indifferent between (AMF) and (GD). The ASI consideration should then favour (AMF) because:
Before considering ASI, you are underestimating the benefit to the Zambian child. You are underestimating both how long they will live if they avoid malaria and how good their life will be.
Before considering ASI, you are overestimating the benefit to the Kenyan families. You are overestimating how large the next decade is as a proportion of their lives and how much you are improving their aggregate lifetime welfare.
I find this pretty intuitive, but you might find the mathematical model below helpful. Please let me know if you think I’m making a mistake, either ethical or factual.
Mathematical model comparing life-saving vs welfare-increasing interventions
Mathematical setup
Assume a person-affecting axiology where how well a person’s life goes is logarithmic in their total lifetime welfare. Lifetime welfare is the integral of welfare over time. The benefit of an intervention is how much better their life goes: the difference in log-lifetime-welfare with and without the intervention.
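Restating the setup above in symbols (the notation is mine), where w₁ and w₀ are the welfare trajectories with and without the intervention, and T₁ and T₀ the corresponding lifespans:

```latex
\text{benefit} \;=\; \log\!\left(\int_0^{T_1} w_1(t)\,dt\right) \;-\; \log\!\left(\int_0^{T_0} w_0(t)\,dt\right)
```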
Assume ordinary longevity is 80 years, ASI longevity is 1000 years, ordinary welfare is 1 unit/year, ASI welfare is 1000 units/year, and ASI arrives 50 years from now with probability p. Note that these numbers are completely made up—I think ASI longevity and ASI welfare are underestimates.
AMF: Saving the Zambian child
Consider the no-ASI scenario. Without intervention the child dies aged 5, so their lifetime welfare is 5. With intervention the child lives to 80, so their lifetime welfare is 80. The benefit is log(80) − log(5) = 2.77.
Consider the ASI scenario. Without intervention the child still dies aged 5, so their lifetime welfare is 5. With intervention the child lives to 1000, accumulating 50 years at welfare 1 and 950 years at welfare 1000, so their lifetime welfare is 50 + 950,000 = 950,050. The benefit is log(950,050) − log(5) = 12.15.
The expected benefit is (1−p) × 2.77 + p × 12.15.
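As a sanity check, here is a minimal Python sketch of the AMF calculation, using the made-up parameters from the setup (all names are mine):

```python
import math

# Model parameters from the setup above (completely made up, per the post)
ORDINARY_LIFESPAN = 80   # years
ASI_LIFESPAN = 1000      # years
ORDINARY_WELFARE = 1     # units/year
ASI_WELFARE = 1000       # units/year
ASI_ARRIVAL = 50         # years from now

def log_benefit(welfare_with, welfare_without):
    """Benefit of an intervention: difference in log lifetime welfare."""
    return math.log(welfare_with) - math.log(welfare_without)

# No-ASI scenario: die at 5 (lifetime welfare 5) vs live to 80 (lifetime welfare 80)
amf_no_asi = log_benefit(80, 5)   # ≈ 2.77

# ASI scenario: 50 years at welfare 1, then 950 years at welfare 1000
asi_lifetime_welfare = (ASI_ARRIVAL * ORDINARY_WELFARE
                        + (ASI_LIFESPAN - ASI_ARRIVAL) * ASI_WELFARE)  # 950,050
amf_asi = log_benefit(asi_lifetime_welfare, 5)   # ≈ 12.15

def amf_expected(p):
    """Expected benefit given probability p of ASI arriving."""
    return (1 - p) * amf_no_asi + p * amf_asi
```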
GD: Cash transfers to Kenyan families
Assume 10 beneficiaries (five families, roughly 2 adults each). Each person will live regardless of the intervention; GD increases their welfare by 1 unit/year for the rest of their lives (or until ASI arrives, at which point ASI welfare dominates).
Consider the no-ASI scenario. Without intervention each person has lifetime welfare 80. With intervention each person has lifetime welfare 160. The benefit per person is log(160) − log(80) = 0.69.
Consider the ASI scenario. Without intervention each person has lifetime welfare 950,050. With intervention each person has lifetime welfare 950,100 (the extra 50 units from pre-ASI doubling). The benefit per person is log(950,100) − log(950,050) = 0.000053.
The expected benefit per person is (1−p) × 0.69 + p × 0.000053. The total expected benefit across 10 people is 10 times this.
Evaluation at different values of p:
At p = 0 (no ASI), the benefit of AMF is 2.77 and the benefit of GD is 10 × 0.69 = 6.93. GD is roughly 2.5x more valuable than AMF.
At p = 0.5, the expected benefit of AMF is 0.5 × 2.77 + 0.5 × 12.15 = 7.46. The expected benefit of GD is 10 × (0.5 × 0.69 + 0.5 × 0.000053) = 3.47. AMF is roughly twice as valuable as GD.
At p = 1 (ASI certain), the benefit of AMF is 12.15 and the benefit of GD is 10 × 0.000053 = 0.00053. AMF is roughly 23,000x more valuable than GD.
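Putting the two interventions together, a short Python sketch (my own variable names, using the post’s numbers) reproduces the three evaluations above:

```python
import math

def expected_benefit(p, b_no_asi, b_asi):
    """Probability-weighted log-welfare benefit, given P(ASI) = p."""
    return (1 - p) * b_no_asi + p * b_asi

# Per-person benefits computed in the sections above
amf_no_asi = math.log(80 / 5)           # ≈ 2.77
amf_asi = math.log(950_050 / 5)         # ≈ 12.15
gd_no_asi = math.log(160 / 80)          # ≈ 0.69
gd_asi = math.log(950_100 / 950_050)    # ≈ 0.000053

for p in (0.0, 0.5, 1.0):
    amf_total = expected_benefit(p, amf_no_asi, amf_asi)     # 1 beneficiary
    gd_total = 10 * expected_benefit(p, gd_no_asi, gd_asi)   # 10 beneficiaries
    print(f"p={p}: AMF={amf_total:.2f}, GD={gd_total:.5f}")
```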
This is assuming ASI has a positive expected effect on lifespan.
(I think it’s a bit wonky: in most worlds, I think ASI kills everyone, but in some worlds it does radically improve longevity, probably to more than 1000 years, though I think you need some time-discounting. I think this means it substantially reduces the median lifespan but might also substantially increase the mean lifespan. I’m not sure what to make of that, and I can imagine it basically working out to what you say here, but I think it does depend on your specific beliefs about that.)
Hmm, yeah. I’m more hopeful than you, but I think I’d be moved by my argument even with a worldview like “80% extinction, 10% extreme longevity and welfare, 10% business as usual”. I know some people are doomier than that.
Also the timelines matter. If you have 1 year timelines with 99% extinction and 1% extreme longevity and welfare, then I think this still favours AMF over GD. Like, when I imagine myself in this scenario, and compare two benefits — “reduce my chance of dying of malaria in the next year from 10% to 0%”[1] and “double my personal consumption over the next year” — the former seems better.
IDK, I’m pretty uncertain. When I think about ASI in the next 10 years I feel urgency to keep people alive till then, because it would be such an L if someone died just before we achieved extreme longevity and welfare.
[1] I consider 10% rather than 100% because AMF has a tenth as many beneficiaries as GD.
I don’t think it’s clear on longtermist grounds. Some possibilities:
If you think that the amount of resources used on mundane human welfare post-singularity is constant, then adding the Zambian child to the population leads to a slight decrease in the lifespan of the rest of the population, so it’s zero-sum.
If you think that the amount of resources scales with population, then the child takes resources from the pool of resources which will be spent on stuff that isn’t mundane human welfare, so it might reduce the amount of Hedonium (if you care about that).
If you think that the lightcone will basically be spent on the CEV of the humans that exist around the singularity, you might worry that the marginal child’s vote will make the CEV worse.
(I’m not sure what my bottom line view is.)
In general, I worry that we’re basically clueless about the long-run consequences of most neartermist interventions.
Thanks for these considerations, I’ll ponder on them more later.
Here are my immediate thoughts:
>If you think that the amount of resources used on mundane human welfare post-singularity is constant, then adding the Zambian child to the population leads to a slight decrease in the lifespan of the rest of the population, so it’s zero-sum.

Hmm, this is true on impersonal ethics, on which the only moral consideration is maximising pleasurable person-moments. On such a view, killing 1000 infants and replacing them with people with the same welfare is morally neutral. But this violates common sense morality, and I think you should have some credence (under moral uncertainty) that it is bad.
>If you think that the lightcone will basically be spent on the CEV of the humans that exist around the singularity, you might worry that the marginal child’s vote will make the CEV worse.

Hmm, this doesn’t seem clear-cut, certainly not enough to justify deviating so strongly from common-sense morality.
Just naively, it sounds crazy to me.
This consideration assumes that the child you save from malaria cares less about hedonium (or whatever weird thing EAs care about) than the average person. However, you might equally expect that they will care more about hedonium, because they actually owe their lives to EA, whereas almost no one else does.
This consideration assumes that the CEV is weighted equally among all humans, rather than weighted by wealth. If you assume it’s weighted by wealth then the GiveDirectly donation has the same impact on CEV as the AMF donation.
This consideration predicts that someone is incentivised to kill as many people as possible just before the CEV procedure is executed. But a CEV procedure which incentivised people to murder would be terrible, so we wouldn’t run it. We are more likely to run a CEV procedure which rewards people for saving the lives of the participants of the CEV.
This is a great point. Thanks for making it.
I partly have a rather opposite intuition: a (certain type of) positive ASI scenario means we sort out many things quickly, including how to transform our physical resources into happiness, without this capacity being strongly tied to the number of people around at the start of it all.
That doesn’t mean your scenario holds in no circumstances, but it’s unclear to me that it would be the dominant set of possible circumstances.
I don’t just want to maximise happiness; I also want to benefit people. For maximising happiness (and other impersonal values), you should maybe donate to:
Increase probability of survival:
Lightcone Infrastructure
Various political donations
Increase expected longterm value conditional on survival:
Forethought
Center for Longterm Risk
I don’t donate to maximise impersonal happiness, because I think it’s better for me to save money so I have more flexibility in my work.
If people share your objective, in a positive ASI world, maybe we can create many happy human people quasi ‘from scratch’. Unless, of course, you have yet another unstated objective: aiming to make many non-artificially created humans happy instead.
There are children alive right now. We should save them from dying of malaria even if we could ‘replace’ them with new happy people in the future. This consideration is even stronger because of ASI, which makes their potential future astronomically more valuable to them.
I don’t see this defeating my point. As a premise, GD may dominate from the perspective of merely improving the lives of existing people, as we seem to agree. Unless we have a particular bias for long lives specifically of the currently existing humans over humans created in the future, ASI may not be a clear reason to save more lives: it may not only make existing lives longer and nicer, but may also reduce the burden of creating any aimed-at number of (however long-lived) lives. The number of happy future human lives thus hinges less on the preservation of actual lives.
>unless we have a particular bias for long lives specifically of the currently existing humans over in future created humans
Sure, I’m saying I have this bias.
This seems like common sense morality to me: it would be bad (all else equal) to kill 1000 infants, even if their parents would respond by having more children, such that the total population is unchanged.
Anyway, this is a pretty well-trod topic in ethics, and there isn’t much consensus, so the appropriate attitude is moral uncertainty. That is, you should split your credence between person-affecting ethics (where killing and replacing infants is bad) and impersonal ethics (where killing and replacing infants is neutral).