I’ve always liked the phrase “The problem isn’t Terminator, it is King Midas. It isn’t that AI will suddenly ‘decide’ to kill us, it is that we will tell it to without realizing it.” I forget where I saw it first, but it usually gets the conversation going in the right direction.
The same is true for the Terminator plot, where Skynet was given a command to preserve itself by any means—and concluded that killing humans would prevent it from being turned off.
I don’t remember Skynet getting a command to preserve itself by any means. I thought the idea was that it ‘became self-aware’ and reasoned that it had better odds of surviving if it massacred everyone.
It could be a way to steer the conversation from the Terminator topic to the value-alignment topic without directly confronting the other person.