That’d be a good argument that explicitly malicious AI is technically simpler than Friendly AI, but technical complexity isn’t the only constraint on the likelihood of AI of a particular type arising. I’d consider it extremely unlikely that any development team would choose to inculcate a generally malicious value system in their charges; the AI research community is, fortunately, not made up of Bond villains. It doesn’t even work as a mutually-assured-destruction ploy, since the threat isn’t widely recognized.
Situational malice seems more plausible (in military applications, for example), but I’d call that a special case of ordinary unFriendliness.
I could easily see military application + bug in the safeguards ⇒ malicious AI.
Not as likely as ordinary unfriendliness, I think, but certainly plausible.