Participants are at least somewhat aligned with non-participants. People care about their loved ones even if they are a drain on resources. That said, in human history, we do see lots of cases where “sub-marginal participants” are dealt with via genocide or eugenics (both defined broadly), often even when it isn’t a matter of resource constraints.
When humans fall well below marginal utility compared to AIs, will their priorities matter to a system that has made them essentially obsolete? What happens when humans become the equivalent of advanced Alzheimer’s patients who’ve escaped from their memory care units, trying to participate in general society?
The point behind my question is “we don’t know.” If we reason analogously to human institutions (which are made of humans, but not really made or controlled BY individual humans), we have examples in both directions. AIs have less biological drive to care about humans than humans do, but also have more training on human writings and thinking than any individual human does.
My suspicion is that it won’t take long (by historical measures; perhaps only a few decades, but more likely centuries) for a fully disempowered species to become mostly irrelevant. Humans will be pets, perhaps, or parasites (allowed to live because it’s easier than exterminating them). Of course, there are plenty of believable paths that are NOT “computational intelligence eclipses biology in all aspects”: it may hit a wall, it may never develop intent or desire, it may find a way to integrate with biological life rather than remaining separate, etc. Oh, and it may be fragile enough that it dies out along with humans.