Apparent moral alien here. Hi! I’m pretty astonished how many people apparently believe something like that statement. In my world, it’s an off-the-deep-end perverse and sociopathic sort of value to hold. Of course, it’s consistent. I can work with sociopaths, because to a first approximation I can work with anyone. But if there’s enough delay, cooperation is going to become strained, because I’m quite happy with me dying, and you dying, and your loved ones dying (ideally painlessly having lived fulfilling lives!), if it turns out to be necessary in order for there to be a valuable future at all.
Despite introducing myself as an alien, I currently think most humans don’t espouse your statement, because most humans’ moral circle is big enough to preclude it. Other humans reject it on the basis of philosophical open/empty individualism. Others reject it for superstitious reasons or due to ethical injunctions. Others still for the deceptive reasons you mentioned. There are basically a lot of reasons either to not buy it, or to be quiet about it if you do!
As an aside, it appears that in certain circles, the deceptive/prudent reasons you mention for not espousing this view are inverted: a sort of neoliberal/objectivist-celebratory crowd would tend to praise this sort of thing.
A separate aside: there is a strong implicit assumption that AGI is a necessary prerequisite to functional immortality, which isn’t at all obvious to me. I think it’s an insufficiently-justified received wisdom from the days of sloppier, lower-stakes futurism.
Finally, I’m much less confident than you that a locked-in pause is likely happening!
I thought your analysis was generally pretty good otherwise.
Many humans, given a choice between
A) they and their loved ones (actually everyone on Earth) will live forever, with an X-risk of p
B) this happens only after they and everyone they love have died, with an X-risk of less than p
would choose A.
Abortion has a somewhat similar parallel, though with economic risk instead of X-risk and obviously no immortality; yet many are pro-choice.
I think valuing the lives of future humans you don’t know of over the lives of yourselves and your loved ones is the alien choice here.
I think any human with time-consistent preferences prefers A to B for some margin? The question is how much margin.
Don’t understand your abortion analogy at all I’m afraid.
I did introduce myself as ‘apparent moral alien’! - though for the reasons in the rest of my comment I don’t think I’m all that rare. Until quite recently I’d have been confident I was in a supermajority, but I’m less sure of that now, and I weakly entertain the hypothesis it’s a minority.
You are ending a potential future life in exchange for preserving the quality of a present life.
Yet humans routinely sacrifice their own lives for the good of others (see: firefighters, soldiers, high-mountain emergency rescuers, etc.). The X-risk argument is more abstract but basically the same.
A lot of our moral frameworks break down once immortality is a real choice. Sacrificing your own life for the good of others can be reframed as dying a little earlier.
Many of these people go in knowing there’s a small chance of death. A lot of them would probably change their minds if it was a suicide mission (except a minority).
If the 5 year mortality rate of firefighting was 100%, how many would still do that job?
I’m quite happy with … you dying, and your loved ones dying

It’s good of you to say that so plainly. I’ll return the favor and say that I’d run a substantial risk of death if it meant getting rid of you and people like you, to make the world a safer place for my loved ones.
Are you aware I’m a full-time AI safety researcher? I don’t think you want to ‘get rid’ of us. Perhaps if you could politically silence me or intimidate me while having me carry on technical work? Naturally I don’t endorse this plan even for highly fanatical egoists.
Separately, even if I were just my political influence (along with my reference class), I (tentatively) don’t believe you’re as fanatical as your comment claims.