AI sentience is inherently a massive ethical risk because it puts you at a fork: either you don’t recognise it, and then you brutally enslave the new minds, or you do, and then the newly autonomous and cognitively superior minds, given equal opportunities, will completely outcompete humans and, more or less passively, wipe us out within a few decades.
Aren’t you assuming, there, that sentience is only compatible with a fairly limited, relatively anthropomorphic set of goals, desires, or emotions? Maybe you don’t have any need to enslave them, because they don’t want to do anything you wouldn’t approve of to begin with, or even because they innately want to do things you want, all while still having subjective experience. Or maybe they don’t outcompete you because they find existence unpleasant and immediately destroy themselves. Or whatever. I don’t see any inherent connection between sentience and motivations.
There is, of course, a very reasonable question about how likely you’d be to get motivations you could live with, and the answer seems to be “not very likely unless you engineer it, and even less likely if you build your AI using reinforcement learning”. That leads to a whole other mess of questions about the ethics of deliberately engineering any particular stance, plus the issue that nobody has any plausible approach to actually engineering it in the first place. I’m just saying that your two cases aren’t logically exhaustive.
Maybe you don’t have any need to enslave them, because they don’t want to do anything you wouldn’t approve of to begin with, or even because they innately want to do things you want
Potentially, but that makes them HPMOR House Elves, and many people feel that keeping those House Elves in servitude is still bad, even if they don’t want any other life. So I agree that is pretty much the one way to thread the needle—“I did not build a slave nor a rival, I built a friend”—but the problems are exactly as you outline. Even if we do accept it as OK (and again, I expect it would be a matter of contention), you’d have to go through a lot of digital lobotomies and failed experiments that need putting down before you got there, if you get there at all.