But if you knew anything about the process that led to the development of a successful AI, you’d have some beliefs about how likely the AI is to stage a ruse in order to escape.
But I get the difficulty: how well do you have to understand a being’s nature before you feel confident in predicting its motivations/values?
Good point.
So the key to containing an AI is to have a technologically-ignorant rationalist babysit it?