… man, now that the post has been downvoted a bunch I feel bad for leaving such a snarky answer. It’s a perfectly reasonable question, folks!
Overcompressed actual answer: core pieces of a standard doom argument involve claims like “killing all the humans will be very easy for a moderately-generally-smarter-than-human AI” and “killing all the humans (either as a subgoal or as a side effect of other things) is convergently instrumentally useful for the vast majority of terminal objectives”. A standard doom counterargument usually doesn’t dispute those two pieces (though there are of course exceptions); instead, it usually argues that we’ll have ample opportunity to iterate, so it doesn’t matter that the vast majority of terminal objectives instrumentally incentivize killing humans: we’ll iterate until we find ways to avoid that sort of thing.
The standard core disagreement is then mostly about the extent to which we’ll be able to iterate, or will in fact iterate in ways which actually help. In particular, cruxy subquestions tend to include:
- How visible will “bad behavior” be early on? Will there be “warning shots”? Will we have ways to detect unwanted internal structures?
- How sharply/suddenly will capabilities increase?
- Insofar as problems are visible, will labs and/or governments actually respond in useful ways?
Militarization isn’t very centrally relevant to any of these; it mostly bears on things which are not much in doubt anyway, at least in the medium-to-long term.