Strongly unFriendly AI (the kind that tortures you eternally, rather than kills you and uses your matter to make paperclips) would be about as difficult to create as Friendly AI. And since few people would try to create one, I don’t think it’s a likely future.
“unFriendly” doesn’t mean “evil”, just “not explicitly Friendly”. Assuming you already have an AI capable of recursive self-improvement, it’s easy to give it a goal system that will result in the world being destroyed (not because it hates us, but because it can think of better things to do with all this matter), but creating one that’s actually evil or that hates humans (or has some other reason that torturing us would make sense in its goal system) would probably be nearly as hard as the problem of Friendliness itself, as gregconen pointed out.
Actually, it’s quite possible to deny physical means of suicide to prisoners, and sufficiently good longevity tech could make torture for a very long time possible.
I think something like that (say, for actions which are not currently considered crimes) is possible, considering the observable cruelty of some fraction of the human race, but not very likely; on the other hand, I don’t know how to begin to quantify how unlikely it is.
Unless it’s unFriendly AI that revives you and tortures you forever.