Does that mean that you think it’s more likely you can safely build a superintelligence and not remain in control?
What load is “and remain in control” carrying?
On edit: By the way, I actually do believe both that “control” is an extra design constraint that could push the problem over into impossibility, and that “control” is an actively bad goal that’s dangerous in itself. But it didn’t sound to me like you thought any scenario involving losing control could be called “safe”, so I’m trying to tease out why you included the qualifier.
I would have done a lot worse than any of them.