But why would you build an AI in a box if you planned to never let it out?
To have it work for you, e.g., by solving subproblems of Friendly AI. But extracting that work would require letting some information out of the box, which should be presumed unsafe.
Roland: the presumption of unFriendliness is much stronger for an AI than for a human, and the strength of evidence for Friendliness that one can reasonably hope for is much greater.