My views on AI have indeed changed over time, on a variety of empirical and normative questions, but I think you’re inferring larger changes than are warranted from that comment in isolation. Here’s a comment from 2023 where I said:
The term “AI takeover” is ambiguous. It conjures an image of a violent AI revolution, but the literal meaning of the term also applies to benign scenarios in which AIs get legal rights and get hired to run our society fair and square. A peaceful AI takeover would be good, IMO.
In fact, I still largely agree with the comment you quoted. The described scenario remains my best guess for how things could go wrong with AI. However, I chose my words poorly in that comment. Specifically, I was not clear enough about what I meant by “disempowerment.”
I should have distinguished between two different types of human disempowerment. The first type is violent disempowerment, where AIs take power by force. I consider this morally bad. The second type is peaceful or voluntary disempowerment, where humans willingly transfer power to AIs through legal and economic processes. I think this second type will likely be morally good, or at least morally neutral.
My moral objection to “AI takeover”, both now and back then, applies primarily to scenarios where AIs suddenly seize power through unlawful or violent means, against the wishes of human society. I have, and had, far fewer objections to scenarios where AIs gradually gain power by obtaining legal rights and engaging in voluntary trade and cooperation with humans.
The second type of scenario is what I hope I am working to enable, not the first. My reasoning for accelerating AI development is straightforward: accelerating AI will produce medical breakthroughs that could save billions of lives. It will also accelerate dramatic economic and technological progress that will improve quality of life for people everywhere. These benefits justify pushing forward with AI development.
I do not think violent disempowerment scenarios are impossible, just unlikely. And I think that pausing AI development would not meaningfully reduce the probability of such scenarios occurring. Even if pausing AI did reduce this risk, I think the probability of violent disempowerment is low enough that accepting this risk is justified by the billions of lives that faster AI development could save.
My moral objection to “AI takeover”, both now and back then, applies primarily to scenarios where AIs suddenly seize power through unlawful or violent means, against the wishes of human society. I have, and had, far fewer objections to scenarios where AIs gradually gain power by obtaining legal rights and engaging in voluntary trade and cooperation with humans.
What about a scenario where no laws are broken, but over the course of months to years large numbers of humans are unable to provide for themselves as a consequence of purely legal and non-violent actions by AIs? A toy example would be AIs purchasing land currently used for agriculture and putting it to other uses (you might consider this an indirect form of violence).
It’s a bit of a leading question, but:
1. The way this is framed seems to have a profound reverence for laws and 20th- and 21st-century economic behavior.
2. I’m struggling to picture how you envision the majority of humans continuing to provide for themselves economically in a world where we aren’t on the critical path for cognitive labor. (Some kind of UBI? Do you believe the economy will always allow humans to participate and be compensated beyond their physical needs in some way?)
What about a scenario where no laws are broken, but over the course of months to years large numbers of humans are unable to provide for themselves as a consequence of purely legal and non-violent actions by AIs? A toy example would be AIs purchasing land currently used for agriculture and putting it to other uses (you might consider this an indirect form of violence).
I’d consider it bad if AIs take actions that leave a large fraction of humans completely destitute and dying as a result.
But I think such an outcome would be bad whether it’s caused by a human or an AI. The more important question, I think, is whether such an outcome is likely to occur if we grant AIs legal rights. The answer to this, I think, is no. I anticipate that AGI-driven automation will create so much economic abundance in the future that it will likely be very easy to provide for the material needs of all biological humans.
Generally I think biological humans will receive income through charitable donations, government welfare programs, in-kind support from family members, interest, dividends, and asset sales, or by working human-specific service jobs where consumers intrinsically prefer hiring human labor (e.g., maybe childcare). Given vast prosperity, these income sources seem sufficient to provide most humans with an adequate, if not incredibly high, standard of living.
Thanks for the reply; it was helpful. I’ve elaborated my perspective below and pointed out some concrete disagreements about how labor automation would play out. I wonder if you can identify the cruxes in my model of how the economy and automated labor interact.
I’d frame my perspective as: “We should not aim to put society in a position where >90% of humans need government welfare programs or charity to survive while vast numbers of automated agents perform the labor that humans currently depend on to survive.” I don’t believe we have the political wisdom or resilience to steer the world in that direction while preserving good outcomes for existing humans.
We live in something like a unique balance: through companies, the economy gives individuals the opportunity to sustain themselves and specialize while contributing to a larger whole that typically provides goods and services benefiting other humans. If we create digital minds and robots to naively accelerate these emergent corporate entities’ ability to generate profit, we lose an important ingredient in this balance: human bargaining power. Further, even if we had the ability to create and steer powerful digital minds (which is also contentious), it doesn’t seem obvious that labor automation is a framing that would lead to positive experiences for humans or for those minds.
I anticipate that AGI-driven automation will create so much economic abundance in the future that it will likely be very easy to provide for the material needs of all biological humans.
I’m skeptical that economic abundance driven by automated agents will by default manifest as increased quality and quantity of goods and services enjoyed by humans, and that humans will continue to have the economic leverage to incentivize these human-specific goods.
working human-specific service jobs where consumers intrinsically prefer hiring human labor
I expect the number of roles/tasks available where consumers prefer hiring humans is a rounding error compared to the number of humans who depend on work.
...benign scenarios in which AIs get legal rights and get hired to run our society fair and square. A peaceful AI takeover would be good, IMO.
...humans willingly transfer power to AIs through legal and economic processes. I think this second type will likely be morally good, or at least morally neutral.
Why do you believe this? For my part, one of the major ruinous scenarios on my mind is one where humans delegate control to AIs that then goal-misgeneralize, breaking complex systems in the process; another is one where AIs outcompete ~all human economic efforts “fair and square” and end up owning everything, including (e.g.) rights to all water, partially because no one felt strongly enough about ensuring an adequate minimum baseline existence for humans. What makes those possibilities so unlikely to you?
[I think this comment is too aggressive and I don’t really want to shoulder an argument right now]
With apologies to @Garrett Baker.
I did not read Matthew’s above comment as representing any views other than his own.