Carl’s post sounded weird to me, because large amounts of human utility (more than just pleasure) seem harder to achieve than large amounts of human disutility (for which pain is enough). You could say that some possible minds are easier to please, but human utility doesn’t necessarily value such minds enough to counterbalance s-risk.
Brian’s post focuses more on possible suffering of insects or quarks. I don’t feel quite as morally uncertain about large amounts of human suffering, do you?
As to possible interventions, you have clearly thought about this for longer than me, so I’ll need time to sort things out. This is quite a shock.
large amounts of human utility (more than just pleasure) seem harder to achieve than large amounts of human disutility (for which pain is enough).
Carl gave a reason why future creatures, including potentially very human-like minds, might diverge from current humans in a way that makes hedonium much more efficient. If you assigned significant probability to that kind of scenario, it would quickly undermine your million-to-one ratio. Brian’s post briefly explains why you shouldn’t argue “If there is a 50% chance that x-risks are 2 million times worse, then they are a million times worse in expectation.” (I’d guess that there is a good chance, say > 25%, that good stuff can be as efficient as bad stuff.)
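A minimal numeric sketch of why that expectation argument is shaky (my own illustrative numbers, not from either post): the “million times worse in expectation” figure depends on an arbitrary choice of normalization, a two-envelope-style problem.

```python
# Illustrative numbers only: a 50% chance that bad stuff is
# R = 2,000,000 times more efficient than good stuff, and a
# 50% chance they are equally efficient.
R = 2_000_000
p = 0.5

# Normalize GOOD stuff to 1 unit: bad is either R or 1 units,
# so the expected ratio of bad to good comes out ~1,000,000.
bad_over_good = (p * R + (1 - p) * 1) / 1

# Normalize BAD stuff to 1 unit instead: good is either 1/R or
# 1 units, and the same beliefs now imply bad is only ~2x good.
expected_good = p * (1 / R) + (1 - p) * 1
bad_over_good_alt = 1 / expected_good

print(bad_over_good)      # ~1,000,000
print(bad_over_good_alt)  # ~2
```

The same probabilistic beliefs yield wildly different “expected ratios” depending on which side you hold fixed, which is why taking expectations over efficiency ratios like this doesn’t settle the question.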
I would further say: existing creatures often prefer to keep living even given the possibility of extreme pain. This can be easily explained by an evolutionary story, which suffering-focused utilitarians tend to treat as a debunking explanation: since animals would prefer to keep living regardless of the actual balance of pleasure and pain, we shouldn’t infer anything from that preference. But our strong dispreference for intense suffering has a similar evolutionary origin, and is no more reflective of underlying moral facts than our strong preference for survival is.
Paul, thank you for the substantive comment!