I feel like this isn’t a very useful framework in practice. Do we have any reason to believe that alternative frameworks or ideologies such as communism wouldn’t have led to AGI in a counterfactual world where they were more dominant or lasted longer? The Soviets had the Dead Hand system, which potentially contributed to x-risk from “AI” due to the risk of nuclear warfare, not that the system was particularly intelligent. China is the next closest competitor to the US in the modern AI race (not that it’s particularly communist in practice), and I can envision an alternate timeline where the Soviet Union survived as a communist state to the present day and also embraced modern AI.
More damningly, by disavowing intermediary metrics, you push the cut-off for evaluating the success of such an ideology out to the Heat Death of the universe.
Under this view you can totally have intermediary metrics, they just look more like “how much does your society avoid tragedies of the commons” rather than “what is the median quality of life”.
To be clear, this post was not intended as a subtle endorsement of communism. I agree with MondSemmel’s point that basically any system which produced slower economic growth would probably do better under this view, if only because AI development would be slower.
Fair point. I would still say that, given a specific level of technological advancement and global industrial capacity/globalization, the difference would be minimal. Consider a counterfactual: a world where Communism was far more successful and globally dominant. I expect that such a world would have had slower growth metrics than ours; perhaps they’d have developed comparable transistor technology, or matched our prowess in software engineering, decades or even a century later. Conversely, they might well have had a laxer approach to intellectual property rights, such that training data was even easier to appropriate (fewer lawsuits, if any).
Even so, a few decades or even a century is barely any time at all. It’s not like we can easily tell whether we’re living in a timeline where humanity advanced faster or slower than it would have in aggregate. They might well find themselves in precisely the same position as we do, in terms of relative capabilities and x-risk, just at a slightly different date on the calendar. I can’t think of a strong reason why a world with ideologies different from ours would have, say, differentially focused on AI alignment theory without actual AI models to align. Even LessWrong’s theorizing before LLMs was much more abstract than modern interpretability or capability work in actual labs (which is not the same as claiming it was useless).
Finally, this framework still doesn’t strike me as helpful in practice. Even if we had good reason to think that some other political arrangement would have been superior in terms of safety, that doesn’t make it easy to pivot away. It’s hard enough to get companies and countries to coordinate on AI x-risk today; if we also had to reject modern globalized capitalism in the process, I do not see that working out. That covers today and tomorrow; as for the past, it’s easy to wish that different historical events had led to better outcomes, but even that isn’t amenable to intervention without a time machine.
To rephrase: you find yourself in 2026 looking at machines approaching human intelligence, and that strikes you as happening very quickly. I think that even in a counterfactual world where you observed the same thing in 1999 or 2045, it wouldn’t strike you as particularly different. We had a massive compute overhang before transformers came out, relative to the size of models being trained ~2017-2022. You could well be (for the sake of argument) a Soviet researcher worrying about the alignment of Communist-GPT in 2060, wishing that the capitalists had won because their ideology appeared so self-destructive and backwards that you believed it would have held back progress for centuries. We really can’t know; we’ve only got one world to observe, and even if we knew with confidence, we couldn’t do much about it.
I think OP’s perspective is valid, and I’m not at all convinced by your reply. We’re currently racing towards technological extinction with the utmost efficiency, to the point that it’s hard to imagine that any arbitrary alternative system of economics or governance could be worse by that metric, if only by virtue of producing less economic growth. I don’t see how nuclear warfare results in extinction, either; to my understanding it’s merely a global catastrophic risk, not an existential one. And regarding your final paragraph, there are a lot of orders of magnitude between a system of governance that self-destructs in <10k years and one that eventually succumbs to the Heat Death of the universe.
Anyway, I made similar comments as OP in a doomy comment from last year:
In a world where technological extinction is possible, tons of our virtues become vices:
Freedom: we appreciate freedoms like economic freedom, political freedom, and intellectual freedom. But that also means freedom to (economically, politically, scientifically) contribute to technological extinction. Like, I would not want to live in a global tyranny, but I can at least imagine how a global tyranny could in principle prevent AGI doom, namely by severely and globally restricting many freedoms. (Conversely, without these freedoms, maybe the tyrant wouldn’t learn about technological extinction in the first place.)
Democracy: politicians care about what the voters care about. But to avert extinction you need to make that a top priority, ideally priority number 1, which it can never be: no voter has ever gone extinct, so why should they care?
Egalitarianism: resulted in IQ denialism; if discourse around intelligence were less insane, that would help discussion of superintelligence.
Cosmopolitanism: resulted in pro-immigration and pro-asylum policy, which in turn precipitated both a global anti-immigration and an anti-elite backlash.
Economic growth: the more the better; results in rising living standards and makes people healthier and happier… right until the point of technological extinction.
Technological progress: I’ve used a computer, and played video games, all my life. So I cheered for faster tech, faster CPUs, faster GPUs. Now the GPUs that powered my games instead speed us up towards technological extinction. Oops.
I think I disagree with the counter-examples. The Dead Hand system was created in a conflict with other countries, so it can be viewed as a mostly forced risk, whereas AI races between companies within a single country are more of a “self-destruction” pattern. Capitalism creates rivals (and therefore races, with more risk and less global safety) within one country, more than other economic systems may do.
The Soviets had the Dead Hand system, which potentially contributed to x-risk from “AI” due to the risk of nuclear warfare, not that the system was particularly intelligent.
I strongly doubt that even ideological uniformity would reduce inter-nation competition to zero, and I still doubt that the reduction would be meaningful. Consider that in our timeline, the Soviets and the Chinese had serious border skirmishes that could have escalated further, and did so despite considering the United States to be their primary opponent.
I am not talking about ideological uniformity between two countries; I am talking about events inside one country. As I understand it, the core of a socialist economy is that the government decides where the country’s resources go (whereas under capitalism there are companies, which merely pay taxes). Companies can race against each other; with central planning that is ~impossible. The problem of international conflict is a separate topic.
As of now, for example, the neglect of AI safety comes in large part from races between US companies (with the partial exception of China, which is arguably still years behind and doesn’t have enough compute).