I partly have a rather opposite intuition: a (certain type of) positive ASI scenario means we sort out many things quickly, including how to transform our physical resources into happiness, without that capacity being strongly tied to the number of people around at the start of it all.
That doesn't mean your intuition doesn't hold in any circumstances, but it's unclear to me that those would be the dominant set of possible circumstances.
I don’t just want to maximise happiness; I also want to benefit people. For maximising happiness (and other impersonal values), you should maybe donate to:
- Increase probability of survival:
  - Lightcone Infrastructure
  - Various political donations
- Increase expected longterm value conditional on survival:
  - Forethought
  - Center for Longterm Risk
I don’t donate to maximise impersonal happiness, because I think it’s better for me to save money so I have more flexibility in my work.
If people share your objective, then in a positive ASI world we could maybe create many happy human people more or less ‘from scratch’. Unless, of course, you have yet another unstated objective: making many non-artificially created humans happy instead.
There are children alive right now. We should save them from dying of malaria even if we could ‘replace’ them with new happy people in the future. This consideration is even stronger because of ASI, which makes their potential future astronomically more valuable to them.
I don’t see this defeating my point. As a premise, GD may dominate from the perspective of merely improving the lives of existing people, as we seem to agree. And unless we have a particular bias for long lives specifically of currently existing humans over humans created in the future, ASI may not be a clear reason to save more lives: it may not only make existing lives longer and nicer, but may also reduce the burden of creating any desired number of (however long-lived) lives, so that the number of happy future human lives hinges less on the preservation of currently existing ones.
> unless we have a particular bias for long lives specifically of currently existing humans over humans created in the future
Sure, I’m saying I have this bias.
This seems like common sense morality to me: it would be bad (all else equal) to kill 1000 infants, even if their parents would respond by having more children, such that the total population is unchanged.
Anyway, this is a pretty well-trod topic in ethics, and there isn’t much consensus, so the appropriate attitude is moral uncertainty. That is, you should act under uncertainty between person-affecting ethics (where killing and replacing infants is bad) and impersonal ethics (where killing and replacing infants is neutral).