May I strongly recommend that you try to become a Dark Lord instead?
I mean, literally. Stage some small bloody civil war with an expected body count of several million, become dictator, and provide everyone free insurance coverage for cryonics; it will surely be more ethical than a 10% chance of killing literally everyone, from the perspective of most of the ethical systems I know.
I don’t think staging a civil war is generally a good way of saving lives. Moreover, ordinary aging has about a 100% chance of “killing literally everyone” prematurely, so it’s unclear to me what moral distinction you’re trying to make in your comment. It’s possible you think that:
1. Death from aging is not as bad as death from AI because aging is natural whereas AI is artificial
2. Death from aging is not as bad as death from AI because human civilization would continue if everyone dies from aging, whereas it would not continue if AI kills everyone
In the case of (1) I’m not sure I share the intuition. Being forced to die from old age seems, if anything, worse than being forced to die from AI, since it is long and drawn-out, and presumably more painful than death from AI. You might also think about this dilemma in terms of act vs. omission, but I am not convinced there’s a clear asymmetry here.
In the case of (2), whether AI takeover is worse depends on how bad you think an “AI civilization” would be in the absence of humans. I recently wrote a post about some reasons to think that it wouldn’t be much worse than a human civilization.
In any case, I think this is simply a comparison between “everyone literally dies” vs. “everyone might literally die but in a different way”. So I don’t think it’s clear that pushing for one over the other makes someone a “Dark Lord”, in the morally relevant sense, compared to the alternative.
I think the perspective that you’re missing regarding (2) is that by building AGI one is taking the chance of non-consensually killing vast numbers of people and their children for some chance of improving one’s own longevity.
Even if one thinks it’s a better deal for them, a key point is that you are making the decision for them by unilaterally building AGI. So in that sense it is quite reasonable to see working towards that outcome as an “evil” action.
non-consensually killing vast numbers of people and their children for some chance of improving one’s own longevity.
I think this misrepresents the scenario, since AGI presumably won’t just improve my own longevity: it will improve most people’s longevity (assuming it does that at all), in addition to all the other benefits that AGI would provide the world. Also, both potential decisions are “unilateral”: if some group forcibly stops AGI development, they’re causing everyone else to non-consensually die from old age, by assumption.
I understand you have the intuition that there’s an important asymmetry here. However, even if that’s true, I think it’s important to strive to be accurate when describing the moral choice here.
I agree that the benefits could potentially go to everyone. The point is that, as the person pursuing AGI, you are making the choice for everyone else.
The asymmetry is that if you do something that creates risk for everyone else, I believe that does single you out as an aggressor. Conversely, enforcing norms that prevent such risky behavior seems justified. The fact that by default people are mortal is tragic, but doesn’t have much bearing here. (You’d still be free to pursue life-extension technology in other ways, perhaps including limited AI tools.)
Ideally, of course, there’d be some sort of democratic process here that lets people in aggregate make informed (!) choices. In the real world, it’s unclear what a good solution would be. What we have right now is the big labs creating facts on the ground that society has trouble catching up with, which I think many people are reasonably uncomfortable with.