That’s a fair point; I should have been more explicit.
My post is examining the risk conditional on the labs solving alignment well enough to keep the ASI under their control.
So yes, I agree that the primary risk is uncontrolled alignment failure.
I’m just pointing out that even if labs develop aligned superintelligence, we face a second risk: a global, perpetual monopoly on power.