I think Noah Carl was coping with the “downsides” he listed. Loss of meaning and loss of status are complete jokes. They are the problems of people who don’t have problems. I would even argue that focusing on X-risks rather than S-risks is a bigger form of cope than denying that AI is intelligent at all. I don’t see how you train a superintelligent military AI that doesn’t come to the conclusion that killing your enemies vastly limits the amount of suffering you can inflict on them, which points toward an S-risk rather than a clean extinction.
Edit: I think loss of actual meaning, like conclusive proof that we’re in a dysteleology, would not be a joke. But loss of meaning in the sense of “what am I going to do if I can’t win at agent competition anymore :(” feels like a very first-world problem.
Victory is the aim of war, not suffering.