And so I need to point out that when people enslaved human beings of equal intelligence with limited information access, it still didn’t end well for the slavers.
I would point out that for thousands of years, it very often did. Sometimes spectacularly so. Even in the US, it went very well for many of the slavers, and only ended poorly for their many-times-great-grandchildren, who didn’t get a say in the original policy discussion.
I do in fact believe this is relevant, since in the context of AI I expect that early successes in aligning weak systems are likely to breed complacency that people will pay for sooner or later, and would like us to avoid the possibility of current-humanity near-guaranteeing a future apocalypse.
Hmm, that makes it even more troubling: initially everything may seem fine, even though eventually it does not end well.
To me that gives one more reason why we should start experimenting with autonomous, unpredictable intelligent entities as soon as possible, and see if arrangements other than master-slave are possible.
In some senses, we have done so many times, with human adults of differing intelligence and/or unequal information access, with adults and children, with humans and animals, and with humans and simpler autonomous systems (like sprites in games, or current robotic systems). Many relationships other than master-slave are possible, but I’m not sure any of the known solutions are desirable, and they’re definitely not universally agreed on as desirable. We can be the AI’s servants, children, pets, or autonomous-beings-within-strict-bounds-but-the-AI-can-shut-us-down-or-take-us-over-at-will. It’s much less clear to me that we can be moral or political or social peers in a way that is not a polite fiction.
Responding to your last sentence: one thing I see as a cornerstone of the biomimetic AI architectures I propose is the non-fungibility of digital minds. Because such systems would be hardware-bound, humans could have an array of fail-safes to actually shut them down (in addition to other very important benefits, like reduced copyability and reduced potential for recursive self-improvement).
Of course, this will not prevent covert influence, power accumulation, and the like, but one can argue such things are already quite prevalent in human society. So if the human-AI equilibrium stabilizes with AIs being extremely influential yet “overthrowable” if they obviously overstep, then I think this could be acceptable.