One of the reasons I’m skeptical of S-risk is as follows.
I’m not sure whether it’s a core idea, but I’ve observed that S-risk proponents often promote the idea that some sufficiently large amount of suffering is worse than death.
For example, some of them claim that assisted suicide for a patient in pain is ethical (a claim I find abhorrent, unless the procedure is done for cryonics).
My view is that there is NO fate worse than death. A single human death is worse than trillions of years of the worst possible suffering by trillions of people.
The “some suffering is worse than death” idea increases X-risk: one day, some sufficiently powerful idiot could decide that human extinction is better than an AGI dystopia.
It’s a good idea to work on preventing large amounts of suffering, but S-risk is a bad framework for that.
Putting the question of assisted suicide aside, I agree with what seems to be the core of this answer: the “value calculus” often used by utilitarians is a nice mathematical framework, but ultimately not a real thing (which is not to say that suffering isn’t real, or that one can’t gain useful knowledge from such calculations).
E.g., I would always trade an infinite amount of suffering for +epsilon control of the future, and my current and future values don’t necessarily align. I don’t see how a strong form of utilitarianism can contend with such things.
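To make the first point concrete (this formalization is mine, not something spelled out above): if control of the future is lexically prior to suffering, the preference can be written as

$$(c_1, s_1) \succ (c_2, s_2) \iff c_1 > c_2 \ \text{or} \ \left(c_1 = c_2 \ \text{and} \ s_1 < s_2\right),$$

where $c$ is control of the future and $s$ is suffering. A classic result says no real-valued utility function $u$ can represent such an ordering: for each $c$ the interval $(u(c,1), u(c,0))$ is nonempty, and since $c < c'$ implies $u(c',1) > u(c,0)$, the intervals are pairwise disjoint across uncountably many values of $c$; each would have to contain a distinct rational number, a contradiction. So no single “value” number can encode this preference, which is one precise sense in which a strong form of utilitarianism cannot contend with it.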
I’d prefer to keep these things separate, i.e., (1) your moral preference that “a single human death is worse than trillions of years of the worst possible suffering by trillions of people”, and (2) your policy-level claim that we shouldn’t talk about s-risks because doing so might cause a powerful idiot to take unilateral action that increases x-risk.
I take it that statement 1 is a very rare preference. I, for one, would hate for it to be applied to me. I would gladly trade any health state with a DALY disability weight above roughly 0.05 for a reduction of my lifespan by the same duration, i.e., I’d rather lose a year than live that year in such a state. I’m not saying that you shouldn’t live forever, but I only want to if my well-being is sufficiently high (around or a bit above my current level).
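For concreteness (the arithmetic below is mine, using the standard burden-of-disease accounting rather than anything stated above): DALYs decompose as years of life lost plus years lived with disability,

$$\text{DALY} = \text{YLL} + \text{YLD}, \qquad \text{YLD} = w \cdot L,$$

where $w$ is the disability weight and $L$ the duration in years, and losing $L$ years of life counts as roughly $L$ DALYs. Living 10 years at $w = 0.05$ then costs $0.05 \cdot 10 = 0.5$ DALYs, while giving up those 10 years costs about 10 DALYs. Being indifferent between the two therefore amounts to disvaluing that health state roughly 20 times more strongly than the DALY weight itself encodes; the stated preference is a much stronger claim than the DALY framework’s own trade-off.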
Statement 2 is more worrying to me if taken at face value, but I’m actually not so worried about it in practice. What’s much more common is that people seek power for themselves. Some of them are very successful at it (Ozymandias, Cyrus the Great, Alexander the Great, Jesus, Trajan, …), but they are far fewer than the millions upon millions of narcissistic egomaniacs who try. Our civilization seems to be pretty resilient against such power grabs.
Corollary: we should keep our civilization resilient. That’s equally important to me, because I wouldn’t want someone to seize power and undemocratically condemn all of us to hell, where we would eke out the awful kind of continued existence that comes with it.