Alternatively, one might construe the argument this way:
1. There will be AI++ (before too long, absent defeaters). [See Chalmers.]
2. If the goals of the AI++ differ significantly from the goals of human civilization, human civilization will be ruined soon after the arrival of AI++.
3. Without a massive effort, the goals of the AI++ will differ significantly from the goals of human civilization.
4. Therefore, without a massive effort, human civilization will be ruined soon after the arrival of AI++.
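To make the logical form explicit, here is a minimal validity check in Lean 4. The proposition names (Differ, Ruin, Effort) are my own placeholders, not claims about the world; the point is only that the conclusion follows from premises 2 and 3, with premise 1 serving just to keep the conclusion's antecedent from being vacuous.

```lean
-- Placeholder propositions (illustrative labels, not definitions):
--   Differ : the AI++'s goals differ significantly from humanity's
--   Ruin   : human civilization is ruined soon after AI++ arrives
--   Effort : a massive goal-alignment effort is made
variable (Differ Ruin Effort : Prop)

theorem argument_valid
    (p2 : Differ → Ruin)       -- premise 2: differing goals lead to ruin
    (p3 : ¬Effort → Differ)    -- premise 3: absent a massive effort, goals differ
    : ¬Effort → Ruin :=        -- conclusion: absent a massive effort, ruin
  fun noEffort => p2 (p3 noEffort)
```

The proof is a single composition of the two conditionals, i.e. the argument is a hypothetical syllogism: any disagreement has to be with the premises, not the inference.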
But this may be a less useful structure than the more detailed one you propose. My version simply packs more sub-arguments and discussion into each premise.
The premises (in your argument) that I feel least confident about are #1, #2, and #4.
Good work.
Premise #2 seems very likely to me. Can you give me some reasons to think it isn't?
Premise 2 in my version or utilitymonster’s version?
Sorry, utilitymonster’s version.