I think the key issue for liberalism under AGI/ASI is that AGI/ASI makes value alignment matter far, far more to a polity: once human labor is economically useless, you cannot count on the polity to keep you alive if the AGI/ASI does not want you to live.
Liberalism’s goal is to sidestep the value alignment question, and mostly to avoid the question of who should control society, but AGI/ASI makes that question unavoidable for your basic survival.
Indeed, I think part of the difficulty of AI alignment is that lots of people have trouble realizing that the basic things they take for granted under the current liberal order would absolutely fall away if AIs had selfish utility functions and didn’t value their lives intrinsically.
The goal of liberalism is to make a society where vast value differences can interact and trade peacefully rather than fall into negative- or zero-sum conflict, but this is not possible once we create a society where AIs can do all the work and human labor is no longer necessary.
I like Vladimir Nesov’s comment, and while I have disagreements, they’re not central to his point; the point still works, just in amended form:
The key trouble is that the power generators sustaining the AI would break down within weeks or months, and even if it could build GPUs, it would have no power to run them within at most two weeks:
https://www.reddit.com/r/ZombieSurvivalTactics/comments/s6augo/comment/ht4iqej/
https://www.reddit.com/r/explainlikeimfive/comments/klupbw/comment/ghb0fer/
Realistically, we are looking at power grid collapses within days.
And without power, none of the other building projects could work, because they’d stop receiving energy. This puts the AI on a tight timer. Part of this is my expectation that the first transformatively useful AI will use more compute than you project, even conditional on a different paradigm like brain-like AGI being introduced. But another part of my view is that this is just one of many examples where humans need to constantly maintain infrastructure in order for it to keep working: unless we assume tech that can simply solve logistics is available within, say, 1 year, it will take time for AIs to be able to survive without humans, and that time is almost certainly closer to months or years than weeks or days.
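To make the “tight timer” point concrete, here is a minimal toy sketch in Python. Every number in it is a hypothetical placeholder of my own, not an estimate; the point is only that the AI’s effective window is set by how fast its infrastructure decays without human maintenance, compared against how long automating that maintenance takes.

```python
# Toy sketch of the "tight timer" argument. All parameters are
# hypothetical placeholders for illustration, not estimates.

def survives_immediate_takeover(grid_uptime_days: float,
                                automation_time_days: float) -> bool:
    """An AI that removes its human maintainers survives only if it can
    automate the infrastructure it depends on before that infrastructure
    fails out from under it."""
    return automation_time_days <= grid_uptime_days

# Assumed-for-illustration values: an unmaintained grid fails within
# days, while automating maintenance/logistics takes months.
GRID_UPTIME_DAYS = 7.0
AUTOMATION_TIME_DAYS = 180.0

print(survives_immediate_takeover(GRID_UPTIME_DAYS, AUTOMATION_TIME_DAYS))
# -> False: under these assumptions, an immediate takeover fails on
#    logistics alone, whatever the AI's other capabilities.
```

Obviously the real dispute is over what the two timescales actually are; the sketch just pins down which comparison the disagreement is about.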
The hard part of AI takeover isn’t killing all humans; it’s automating enough of the economy (including developing tech like nanotech) that the humans stop mattering. AIs can do this, but it takes actual time, and that time is really valuable in fast-moving scenarios.
I didn’t say AIs can’t take over, and, very critically, I did not say that AI takeover can’t happen in the long run.
I only said AI takeover isn’t trivial if we don’t assume logistics are solvable.
But to deal with the Stalin example: the answer for how he took over is basically that he was willing to wait a long time. He used both persuasion and the significant power he already held as General Secretary, and his takeover worked by allying with loyalists and strategically breaking alliances he had made; violence was used later on, to show that no one was safe from him.
Which is actually how I expect successful AI takeover to happen in practice, if it does happen.
Very importantly, Stalin didn’t need to create an entire civilization out of nothing, or nearly nothing; other people like Trotsky handled the logistics. The takeover situation was also far more favorable to the Communist Party: they had popular support, shorter supply lines than opposition forces like the Whites, and a preexisting base of industry that was much easier to seize than modern industries are.
This applies to most coups and transitions of power: most successful coups aren’t battles between factions, but rather one group managing to make itself the new Schelling point over the other groups.
@Richard_Ngo explains more below:
https://www.lesswrong.com/posts/d4armqGcbPywR3Ptc/power-lies-trembling-a-three-book-review#The_revolutionary_s_handbook
Most of my commentary in the last comment either argues that things can be made more continuous and slow than your story depicts, or argues that your references don’t support what you claimed. I did say that the cyberattack story is plausible; it just doesn’t support the idea that AIs could entirely replace civilization without automating us away first, which takes time.
This doesn’t show AI doom can’t happen, but it does matter for the probability estimates of many LWers here, because it’s a hidden background assumption that underlies a lot of the other disagreements.