Thanks for the summary. I really like your phrasing “We will not wake up to a Skynet banner; we will simply notice, one product launch at a time, that fewer meaningful knobs are within human reach.”
But as for “by what title/right do we insist on staying in charge?” I find it odd to act as if there is some external moral frame we need to satisfy in order to maintain power. By what right does a bear catch a fish? Or a mother feed her child? I hope that a moral frame comprehensive enough to include humans is sufficiently compelling to future AIs to make them treat us well, but I don’t think that happens by default.
I think we should frame the problem as “how do we make sure we control the moral framework of future powerful beings”, not as “how do we justify our existence to whatever takes over”. I think it’s entirely possible for us to end up building something that takes over that doesn’t care about our interests, and I simply care about (my) human interests, full stop, with no larger justification.
I might have an expansive view of my interests that includes all sorts of charity to other beings in a way that is easy for other beings to get on board with. But there are just so, so many possible beings that could exist that won’t care about my interests or moral code. Many already exist with us on this planet, such as wild animals and totalitarian governments. So my plea is: don’t think you can argue your way into being treated well! Instead, make sure that any being or institution you create has a permanent interest in treating you well.