Hmm, so it is even more troubling when things eventually end badly even though, initially, everything seemed fine.
To me that gives one more reason why we should start experimenting with autonomous, unpredictable intelligent entities as soon as possible, and see whether arrangements other than master-slave are possible.
In some senses, we have done so many times, with human adults of differing intelligence and/or unequal information access, with adults and children, with humans and animals, and with humans and simpler autonomous systems (like sprites in games, or current robotic systems). Many relationships other than master-slave are possible, but I’m not sure any of the known solutions are desirable, and they’re definitely not universally agreed on as desirable. We can be the AI’s servants, children, pets, or autonomous-beings-within-strict-bounds-but-the-AI-can-shut-us-down-or-take-us-over-at-will. It’s much less clear to me that we can be moral or political or social peers in a way that is not a polite fiction.
Responding to your last sentence: one cornerstone of the biomimetic AI architectures I propose is the non-fungibility of digital minds. Because such minds would be hardware-bound, humans could retain an array of fail-safes to actually shut them down (in addition to other very important benefits, like reduced copyability and limited recursive self-improvement).
Of course, this will not prevent covert influence, power accumulation, and the like, but one can argue such things are already quite prevalent in human society. So if the human-AI equilibrium stabilizes with AIs being extremely influential yet “overthrowable” when they obviously overstep, then I think that could be acceptable.