It occurs to me that one of the best possible things that could happen here is if the first self-aware AI is in a robot, and not too smart. Why?
We would expect that such a robot be social, in ways that we wouldn’t demand of a server rack. This would more readily expose any unfriendly elements of its programming (and unless the problem is a whole lot easier than it seems, there will be some).
So far, robots have been given the benefit of the doubt because they’re obviously complicated appliances. Once that no longer applies—once it’s past ‘Wow, you’re really good at imitating a person’ and into ‘Do I like you’ territory—then we will naturally begin applying different standards to them.
On the other hand, it could be that friendliness sufficient for such limited AIs does nothing for a superintelligence. Even so, I think that this would raise the profile of the problem, give it more mindshare, and generally help.
There is something entertainingly ironic about this sentiment being expressed on an online forum.
It’s a lot harder to hide that you’re a dog when you’re not on the internet.