I don’t buy the “million times worse,” at least not if we talk about the relevant E(s-risk moral value) / E(x-risk moral value) rather than the irrelevant E(s-risk moral value / x-risk moral value). See this post by Carl and this post by Brian. I think that responsible use of moral uncertainty will tend to push you away from this kind of fanatical view.
I agree that if you are million-to-1 then you should be predominantly concerned with s-risk; I think s-risks are somewhat improbable/intractable, but not that improbable+intractable. I’d guess the probability is ~100x lower than for x-risk, and the available object-level interventions are perhaps 10x less effective. The particular scenarios discussed here seem unlikely to lead to optimized suffering; only “conflict” and “???” really make any sense to me. Even on the negative utilitarian view, it seems like you shouldn’t care about anything other than optimized suffering.
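(A rough back-of-the-envelope combining those guesses; the figures are illustrative placeholders, not precise estimates:

$$ \frac{\text{EV of s-risk work}}{\text{EV of x-risk work}} \;\approx\; \underbrace{10^{6}}_{\text{moral weight}} \times \underbrace{10^{-2}}_{\text{relative probability}} \times \underbrace{10^{-1}}_{\text{relative tractability}} \;=\; 10^{3}. $$

So a million-to-1 weighting still leaves s-risk work ahead by roughly a thousandfold, whereas anything close to an even weighting lets the probability and tractability discounts dominate.)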
The best object-level intervention I can think of is reducing our civilization’s expected vulnerability to extortion, which seems poorly-leveraged relative to alignment because it is much less time-sensitive (unless we fail at alignment and so end up committing to a particular and probably mistaken decision-theoretic perspective). From the perspective of s-riskers, it’s possible that spreading strong emotional commitments to extortion-resistance (e.g. along the lines of UDT or this heuristic) looks somewhat better than spreading concern for suffering.
The meta-level intervention of “think about s-risk and understand it better / look for new interventions” seems much more attractive than any object-level interventions we yet know, and probably worth investing some resources in even if you take a more normal suffering vs. pleasure tradeoff.

If this is the best intervention and is much more likely to be implemented by people who endorse suffering-focused ethical views, it may be the strongest incentive to spread suffering-focused views. I think that higher adoption of suffering-focused views is relatively bad for people with a more traditional suffering vs. pleasure tradeoff, so this is something I’d like to avoid (especially given that suffering-focused ethics seems to somehow be connected with distrust of philosophical deliberation). Ironically, that gives some extra reason for conventional EAs to think about s-risk, so that the suffering-focused EAs have less incentive to focus on value-spreading.

This also seems like an attractive compromise more broadly: we all spend a bit of time thinking about s-risk reduction and taking the low-hanging fruit, and suffering-focused EAs do less stuff that tends to lead to the destruction of the world. (Though here the non-s-riskers should also err on the side of extortion-resistance, e.g. trading with the position of rational non-extorting s-riskers rather than whatever views/plans the s-riskers happen to have.)
An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the latter, then x-risk and s-risk reduction may end up being aligned. If the former, then at best the s-riskers are indifferent to survival and need to resort to more speculative interventions. Interestingly, in this case it may also be counterproductive for s-riskers to expand their influence or acquire resources. My guess is that mature suffering-hating civilizations reduce s-risk, since immature suffering-hating civilizations probably provide a significant part of the game-theoretic incentive yet have almost no influence, and sane suffering-hating civilizations will provide minimal additional incentives to create suffering. But I haven’t thought about this issue very much.
Paul, thank you for the substantive comment!

Carl’s post sounded weird to me, because large amounts of human utility (more than just pleasure) seem harder to achieve than large amounts of human disutility (for which pain is enough). You could say that some possible minds are easier to please, but human utility doesn’t necessarily value such minds enough to counterbalance s-risk.
Brian’s post focuses more on possible suffering of insects or quarks. I don’t feel quite as morally uncertain about large amounts of human suffering, do you?
As to possible interventions, you have clearly thought about this for longer than me, so I’ll need time to sort things out. This is quite a shock.
large amounts of human utility (more than just pleasure) seem harder to achieve than large amounts of human disutility (for which pain is enough).
Carl gave a reason to think that future creatures, including potentially very human-like minds, might diverge from current humans in a way that makes hedonium much more efficient. If you assigned significant probability to that kind of scenario, it would quickly undermine your million-to-one ratio. Brian’s post briefly explains why you shouldn’t argue “If there is a 50% chance that x-risks are 2 million times worse, then they are a million times worse in expectation.” (I’d guess that there is a good chance, say > 25%, that good stuff can be as efficient as bad stuff.)
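To spell out that fallacy with toy numbers (purely illustrative): suppose the achievable badness of optimized bad stuff is B, and that with probability 0.5 optimized good stuff is equally efficient (G = B) while with probability 0.5 it is 2 million times less efficient (G = B/2,000,000). Then

$$ \mathbb{E}[B/G] = 0.5\cdot 1 + 0.5\cdot 2\times 10^{6} \approx 10^{6}, \qquad \frac{\mathbb{E}[B]}{\mathbb{E}[G]} = \frac{B}{0.5\,B + 0.5\,B/(2\times 10^{6})} \approx 2. $$

The expectation of the ratio is about a million, but the ratio of expectations, which is the quantity that matters for prioritization, is about 2.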
I would further say: existing creatures often prefer to keep living even given the possibility of extreme pain. This can be easily explained by an evolutionary story, which suffering-focused utilitarians tend to view as a debunking explanation: given that animals would prefer to keep living regardless of the actual balance of pleasure and pain, we shouldn’t infer anything from that preference. But our strong dispreference for intense suffering has a similar evolutionary origin, and is no more reflective of underlying moral facts than is our strong preference for survival.
and suffering-focused EAs do less stuff that tends to lead to the destruction of the world.
In support of this, my system 1 reports that if it sees more intelligent people taking s-risk seriously, it is less likely to nuke the planet if it gets the chance. (I’m not sure I endorse nuking the planet; just reporting an emotional reaction.)
especially given that suffering-focused ethics seems to somehow be connected with distrust of philosophical deliberation
Can you elaborate on what you mean by this? People like Brian or others at FRI don’t seem particularly averse to philosophical deliberation to me...
This also seems like an attractive compromise more broadly: we all spend a bit of time thinking about s-risk reduction and taking the low-hanging fruit, and suffering-focused EAs do less stuff that tends to lead to the destruction of the world.
I support this compromise and agree not to destroy the world. :-)
Those of us who sympathize with suffering-focused ethics have an incentive to encourage others to think about their values now, at least in crude enough terms to take a stance on prioritizing preventing s-risks vs. making sure we get to a position where everyone can safely deliberate their values further and then everything gets fulfilled. Conversely, if one (normatively!) thinks the downsides of bad futures are unlikely to be much worse than the upsides of good futures, then one is incentivized to promote caution about taking confident stances on anything population-ethics-related, and to instead value deeper philosophical reflection. The latter also has the upside of being good from a cooperation point of view: everyone can work on the same priority (building safe AI that helps with philosophical reflection) regardless of one’s inklings about how personal value extrapolation is likely to turn out.
(The situation becomes more interesting/complicated for suffering-focused altruists once we add considerations of multiverse-wide compromise via coordinated decision-making, which, in extreme versions at least, would call for being “updateless” about the direction of one’s own values.)
Can you elaborate on what you mean by this? People like Brian or others at FRI don’t seem particularly averse to philosophical deliberation to me...
People vary in what kinds of value change they would consider drift vs. endorsed deliberation. Brian has in the past publicly come down unusually far on the side of “change = drift”; I’ve encountered similar views on one other occasion from this crowd; and I had heard secondhand that this was relatively common.
Brian or someone more familiar with his views could speak more authoritatively to that aspect of the question, and I might be mistaken about the views of the suffering-focused utilitarians more broadly.
An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the former, then x-risk and s-risk reduction may end up being aligned.
Did you mean to say, “if the latter” (such that x-risk and s-risk reduction are aligned when suffering-hating civilizations decrease s-risk), rather than “if the former”?