Actually causing s-risks seems to be a pretty hard problem, at least one that requires far more nuanced models than merely causing extinction.
To maximize human suffering per unit of space-time, you need a good model of human values, just like a Friendly AI.
But to create an astronomical amount of human suffering (without really maximizing it), you only need to fill an astronomical amount of space-time with humans living in bad conditions, and prevent them from escaping those conditions. That's relatively easier.
Instead of Thamiel, imagine immortal Pol Pot with space travel.
Ah, okay. Thanks for the clarification here.