Mostly the first reason. The “made of atoms that can be used for something else” piece of the standard AI x-risk argument also applies to suffering conscious beings, so an AI would be unlikely to keep them around if the standard AI x-risk argument ends up being true.
There’s a wide variance in how “suffering” is perceived, weighted, and (dis)valued, and no known resolution to different intuitions about it.
There’s no real agreement on what S-risks even are, and whether they’re anything but a tiny subset of other X-risks.
Many people care less about others' suffering than they do about others' positive-valence experience. This may or may not be related to the fact that suffering is generally low-status while satisfaction/meaning is high-status.
S-risks are barely discussed on LW. Is that because:
People think they are so improbable that they're not worth mentioning?
People are scared to discuss them?
People want to avoid creating hyperstitious textual attractors?
Other reasons?
See https://web.archive.org/web/20230505191204/https://www.lesswrong.com/posts/5Jmhdun9crJGAJGyy/why-are-we-so-complacent-about-ai-hell for an earlier, longer discussion of this.