Due to the complexity and fragility of human values, any superintelligence that fulfills them will probably be adjacent in design space to many other superintelligences that cause lots of suffering (and causing suffering is also much cheaper), so a wrong superintelligence might take over due to human error, malice, or arms races. That's where most s-risk comes from, I think. The one-in-a-million number seems optimistic, actually.