I think a critical issue is that disempowerment implies a loss of control we currently have, but this notion of control is poorly defined and unfortunately left implicit.
If we concretize the idea of control, the extreme version is that if humanity unanimously chooses some action, that action occurs. This is a bit overstated, but the obvious weak version (a majority suffices) is already untrue: if a majority of citizens in a country want some action to occur, say, for a specific company to turn off a datacenter and stop running a given AI model, in a liberal democracy that majority cannot reliably ensure it happens, because of the protections and processes in place. The intermediate version is probably untrue as well: even a supermajority cannot reliably dictate this type of action, and certainly cannot do so quickly.
Based on this, I think critics of the gradual disempowerment argument would make a reasonable point: this isn't a new thing, and it isn't obviously being accelerated by AI beyond the extent to which AI accelerates wealth or power concentration. Companies already ignore laws, power is already concentrated in few hands, and to date this has had little to do with AI.
That last claim seems incorrect over the last couple of years, and also incorrect historically if you broaden from AI to "information processing and person modelling technologies that help turn money into influence".
But more generally, gradual disempowerment (GD) can be viewed either as a continuation of historical trends or as something new. I think I'm more in the "continuation" camp, vs. e.g. Duvenaud, who would stress that things change once humans become redundant.
I'm guessing we don't actually strongly disagree here, but I think that unless you're broadening "information processing and person modelling technologies" all the way to just "technologies", it's only been a trend for a couple of decades at most. And even with that broadening, it's only been true under some very narrow circumstances in the West recently.
I think supermajorities could do things like this pretty reliably, if it's something they care a lot about. In the US, if a supermajority of people in Congress want something to happen, and are incentivized to vote their beliefs because a supermajority of voters agree, then they can probably pass a law to make it happen. The president would probably be part of the supermajority and therefore cooperative, and it might work even if they aren’t. Laws can do a lot.
Of course, it’s easy to construct supermajorities of citizens who can’t do this kind of thing, if they disproportionately include non-powerful people and don’t include powerful people. But that’s more about power being unevenly distributed between humans, and less about humans as a collective being disempowered.
Those last two words are doing a supermajority of the work!
And yes, it's about uneven distribution of power, but that power gradient can shift towards ASI pretty quickly, which is the argument. Still, the normative concern that most humans have already lost control stands regardless.
The president would probably be part of the supermajority and therefore cooperative, and it might work even if they aren’t.
We’re seeing this fail in certain places in real time today in the US. But regardless, the assumption of correlation of preferences often fails, partly due to the power imbalances themselves.
I’m guessing we don’t actually strongly disagree here … it’s only been a trend for a couple decades at most.
Yeah, I roughly agree. Edit to add: I might say algorithmic trading and marketing (which are older) are already doing this, e.g., but it's a bit subjective and uncertain.