This is good, though it focuses on extinction risk. Even without extinction, I think there is a high risk of gradual, permanent disempowerment, especially in cases where intent alignment is solved but value alignment isn't. A simple argument here would replace the last bullet point above with this:
- Putting autonomous ASI in a position of a bit more rather than a bit less control is advantageous for almost all tasks, because humans will be slow and ineffective compared to ASI.
- This would plausibly lead us to hand over all control to ASI, which would be irreversible.
- This is bad if ASI doesn't necessarily have our overall best interest in mind.
That makes six points overall, though. It should be shortened if possible.