Do we have a good story about why this hasn’t already happened to humans? Systems don’t actually care about the individuals they comprise, and certainly don’t care about the individuals who are neither taxpayers, selectorate members, contributors, nor customers.
Why do modern economies support so many non-participants? Setting aside the marginal and slightly sub-marginal workers, who don’t cost much and may have option value or be useful for keeping money moving, there are a lot of people who are clearly a drain on resources.
I think the framework from “The Dictator’s Handbook” can be applied: citizens get as much freedom and as many benefits as is (short-term) optimal for the rulers. For example, if a country needs skilled labor and transportation to create tax revenue, then you can predict the government will fund schools, roads, and maybe even hospitals. OTOH, if the country has rich deposits of gold located near the ports, then there’s no need for any of that.
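To make that logic concrete, here is a minimal toy model of the selectorate argument (my own illustration with made-up numbers, not anything from the book): the ruler funds public goods only to the extent that revenue depends on citizen productivity.

```python
import math

# Toy selectorate model (my own illustration; numbers are made up, not from the book).
# A ruler splits a fixed budget between public goods and personal consumption.
# Tax revenue rises with citizen productivity, which public goods improve with
# diminishing returns; resource revenue (gold near the ports) doesn't depend
# on citizens at all.

def ruler_payoff(public_spend, budget, tax_share, resource_revenue):
    productivity = 1.0 + 2.0 * math.sqrt(public_spend)   # diminishing returns
    revenue = tax_share * productivity + resource_revenue
    return (budget - public_spend) + revenue             # kept budget + revenue

def best_public_spend(budget, tax_share, resource_revenue, steps=10_000):
    grid = (budget * i / steps for i in range(steps + 1))
    return max(grid, key=lambda s: ruler_payoff(s, budget, tax_share, resource_revenue))

# Revenue depends on taxing productive citizens -> fund schools and roads.
print(best_public_spend(budget=1.0, tax_share=0.8, resource_revenue=0.1))   # ~0.64
# Revenue comes from resource extraction -> citizens get almost nothing.
print(best_public_spend(budget=1.0, tax_share=0.1, resource_revenue=1.0))   # ~0.01
```

The point of the sketch is only that the optimum shifts with the revenue source: the same ruler, optimizing the same way, funds public goods in the labor-dependent case and nearly nothing in the extraction case.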
Since reading this book I am also very worried about scenarios of human disempowerment. I’ve tried to ask some questions around it:
Can Homo sapiens sustain an economy parallel to that of AIs?
How does politics interact with AI? (for some reason: negative 18 votes)
I wonder if this is somehow harder to understand for citizens of the USA than for someone from a country which didn’t care about its citizens at all. For example, after Lukashenko was “elected” in Belarus, people went to the streets to protest, yet this made no impression on the rulers. They didn’t have any bargaining power, it seems.
Importantly, in the Dictator’s Handbook case, some humans do actually get the power.
This is why I was saying that the scenario in the paper cannot really lead to existential catastrophe, at least not without additional assumptions.
Participants are at least somewhat aligned with non-participants. People care about their loved ones even if they are a drain on resources. That said, in human history, we do see lots of cases where “sub-marginal participants” are dealt with via genocide or eugenics (both defined broadly), often even when it isn’t a matter of resource constraints.
When humans fall well below marginal utility compared to AIs, will their priorities matter to a system that has made them essentially obsolete? What happens when humans become the equivalent of advanced Alzheimer’s patients who have escaped from their memory care units and are trying to participate in general society?
The point behind my question is: we don’t know. If we reason by analogy to human institutions (which are made of humans, but not really made or controlled BY individual humans), we have examples in both directions. AIs have less of a biological drive to care about humans than humans do, but they also have more training on human writing and thinking than any individual human does.
My suspicion is that it won’t take long (in historical terms; perhaps only a few decades, but more likely centuries) for a fully-disempowered species to become mostly irrelevant. Humans will be pets, perhaps, or parasites (allowed to live because it’s easier than exterminating them). Of course, there are plenty of believable paths that are NOT “computational intelligence eclipses biology in all aspects”: it may hit a wall, it may never develop intent/desire, it may find a way to integrate with biological life rather than remaining separate, etc. Oh, and it may be fragile enough that it dies out along with humans.
I think you miss the point that gradual disempowerment from AI happens because AI is the more economically (and otherwise) performant option, one that systems can and will select instead of humans. Less reliance on human involvement leads to less bargaining power for humans.
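As a minimal sketch of that dynamic (my own toy model with assumed numbers, not anything from the paper): if the wage a human can command is capped by the employer’s cost of replacing them, then cheaper and more capable AI substitutes drive that cap toward zero.

```python
# Toy bargaining-power model (my own assumption, not from the paper): treat the
# wage a human can command as capped by the employer's cost of replacing them.
# As AI covers more of the job at lower cost, the cap collapses.

def human_wage_ceiling(ai_cost_per_task, ai_task_coverage, human_only_premium=1.0):
    """Max wage an employer would pay a human rather than automate.

    ai_task_coverage: fraction of the job AI can do (0..1).
    human_only_premium: value of the residual tasks only a human can do.
    """
    replacement_cost = ai_cost_per_task * ai_task_coverage
    residual_value = human_only_premium * (1.0 - ai_task_coverage)
    return replacement_cost + residual_value

for coverage in (0.2, 0.5, 0.8, 0.99):
    print(coverage, round(human_wage_ceiling(0.1, coverage), 3))
# The ceiling falls from 0.82 toward ~0.11: as coverage -> 1 and AI cost -> 0,
# human bargaining power goes to zero.
```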
But we already have examples: Molochian corporate structures that have more or less lost the need to value individual humans, because they can afford a high churn rate and there are always other people who will take a decently paid corporate job even if the conditions are … suboptimal.