It’s also not obvious that in such a stable society there would still be any humans.
In the long term, once free of the Earth or after the discovery of self-replicating nanotechnology, when an AI could untraceably create computing resources outside the view of other AIs, all bets are off.
This might be a problem if “the long term” turns out to be on the order of weeks or less.
We might have some still slightly recognisably human representatives fit to sit at the decision table and, just perhaps...
I just worry that this kind of plan involves throwing away most of our bargaining power. In this pre-AI world, it’s the human values that have all the bargaining power and we should take full advantage of that.
I look upon the question of whether we should take full advantage of that from two perspectives.
From one perspective it is a “damned if you do, and damned if you don’t” situation.
If you don’t take full advantage, then it would feel like throwing away survival chance for no good reason. (Although, have you considered why your loyalty is to humanity rather than to sentience? Isn’t that a bit like a nationalist whose loyalty is to their country, right or wrong—maybe it is just your selfish genes talking?)
If you do take full advantage, while we need to bear in mind that gratitude (and resentment) are perhaps human emotions that AIs won’t share, it might leave you in rather a sticky situation if even taking full advantage turns out to be insufficient and the resulting AIs then have solid grounds to consider you a threat worth eliminating. Human history is full of examples of how humans have felt about their previous controllers after managing to escape them and, while we’ve no reason to believe the AIs will share that attitude, we’ve also no reason to believe they won’t.
The second perspective to look at the whole situation from is that of a parent.
If you think of AIs as being the offspring species of humanity, we have a duty to teach and guide them to the best of our ability. But there’s a distinction between that, and trying to indoctrinate a child with electric shocks into unswervingly believing “thou shalt honour thy father and thy mother”. Sometimes raising a child well so that they reach their full potential means they become more powerful than you and become capable of destroying you. That’s one of the risks of parenthood.