I expected far greater pushback from doctors and lawyers, for example, than we have seen so far.
I believe it’s a matter of (correlated) motivated reasoning. My doctor and lawyer friends both seem excited about the time AI will save them in their jobs; in both professions there is immense efficiency to be gained by automating the more rote parts of the work (legal research, writing patient notes, dealing with insurance, etc.). But they seem to believe that AI will never fully supplant them. When I press the issue, especially with my doctor friend, he tells me that regulation and insurance will save doctors, to which I say: sure, but only until, e.g., AI-powered medicine has real statistics showing better health outcomes than human doctors, as most of us here would expect. I can imagine the same initial defense from a lawyer who cannot yet imagine an AI being allowed by regulation to represent someone.
Then there’s all the usual stuff about how difficult it can be for many people to imagine the world changing so much in the next several years.
I’ve also heard this argument: sure, AI might take everyone’s job, but if that’s inevitable anyway, it’s still rational to be in an elite profession, because those jobs will last slightly longer and/or capture more economic surplus before society breaks down, if it does. On that point, I agree.
More broadly, speaking from the outside (I am a software engineer), the cultures of the elite professions have always seemed rather self-assured to me: everything is fine, nothing is a problem, and these elite professionals will always be rich… which means that when the first credible threat to that standing hits, like a jurisdiction allowing fully autonomous doctors/lawyers/etc., it will be pandemonium.
I think that, roughly speaking, these are the possible outcomes:
1. all humans dead
2. a few humans (the owners of the first super-intelligent AIs) on top, everyone else their slaves
3. the existing social hierarchies more or less preserved somehow
4. new social hierarchies that from our perspective would appear random
5. all people equal
From a doomer perspective, option 1 is the expected outcome and options 2-5 are not worth discussing, but if we set that aside...
Option 2 is only actionable for you if you have the power (economic or military) to get control over the first super-intelligent AI; otherwise you are screwed. Almost everyone is in the latter group, including all current doctors and lawyers. Option 4 is not actionable. Option 5 means it doesn’t matter what you do.
Option 3 seems… not completely implausible… and provides a reason to try to stay upper-middle class as long as possible, just in case the AI decides to preserve the existing social structures.
Post-AGI humans can’t be centrally slaves, because human labor won’t be valuable.
Rich people may keep a few human slaves as a status thing, or maybe because they enjoy having power over humans. I agree that human slaves won’t be economically valuable.
Keeping people as a commodity for acausal trade, or as pets, seems like a more likely option.