What if you’re like me, and you consider it extremely implausible that even a strong superintelligence would be sentient unless it were explicitly programmed to be (or at least deliberately created with a very human-like cognitive architecture), and you further believe that a sentient AI is vastly more likely than a non-sentient one to be unfriendly?
I think you would be relatively exceptional, at least in how you propose treating a sentient AI, so people like you aren’t likely to be the deciding factor in whether an AI is let out of the box.