The country or countries that first develop superintelligence will make sure others cannot follow,
You seem to think that superintelligence, however defined, will by default be taking orders from meatbags, or at least care about the meatbags’ internal political divisions. That’s kind of heterodox on here. Why do you think that?
That’s a fair point; I should have been more explicit.
My post examines the risk conditional on the labs solving alignment well enough to keep the ASI under their control.
So yes, I agree that the primary risk is uncontrolled alignment failure.
I’m just pointing out that even if the labs do develop aligned superintelligence, we still face a second risk: a global, perpetual monopoly on power.