An AI might do a reasonable thing to pursue a reasonable goal, but be wrong. That’s the sort of thing you’d expect a human to do now and then, and an AI might be less likely to do that than a human. Considering the amount of force an AI can apply, we should probably be more worried than we are about AIs which are just plain making mistakes.
However, the big concern here is that an AI can go wrong because humans try to specify a goal for it but don’t think it through adequately. For example (and hardly the worst case), the AI is tasked with protecting humans, but “human” is defined so narrowly that just about any attempt at self-improvement is frustrated.
Or (and I consider this a very likely failure mode), the AI is developed by an organization and the goal is to improve the profit and/or power of the organization. This doesn’t even need to be your least favorite organization for things to go very wrong.
If you’d like a fictional handling of the problem, try The Jagged Orbit by John Brunner.
What a wonderfully compact analysis. I’ll have to check out The Jagged Orbit.
As for an AI promoting an organization’s interests over the interests of humanity—I consider it likely that our conversations won’t be able to prevent this from happening. But the problem certainly seems important enough to warrant discussion.