Many prominent figures, including Sam Altman and Elon Musk, have suggested universal basic income (UBI) as a solution for when artificial intelligence renders human labor obsolete. Musk has even promised “universal high income.” However, there are serious reasons to be skeptical of this vision. The framing assumes not only that alignment will be solved, but also that the AIs won’t be aligned to some narrow elite, and that they will uphold something like current capitalism with its property rights.
The Trust Problem
Elon Musk has said that “in the benign scenario, probably none of us will have a job” and promises not just UBI but “Universal High Income.” Yet when he took a government role with DOGE, one of his first moves was to defund foreign aid programs, including PEPFAR (which provides HIV treatment) and programs fighting malaria and tuberculosis. A government memo estimated the cuts could cause millions of additional deaths, with one analysis projecting approximately 600,000 deaths, two-thirds of them children. Bill Gates remarked that “the picture of the world’s richest man killing the world’s poorest children is not a pretty one.” While some of these cuts were later reversed, infectious disease programs for HIV remain at roughly 30% of their pre-cut funding levels.
The Leverage Problem
Your work is your leverage in society. Throughout history, even the worst rulers depended on their subjects for labor and resources. Mutual dependence is why society functions: why you can go to the supermarket and buy food, why housing exists for you, why society is “aligned” to provide you with space to live a decent life. And if you are more useful to others than the average person, you are rewarded proportionally. In a fully automated world, you become a net burden. If we assume that the police and military have also been automated, there is no realistic option to rebel or protest for change either. Resource-dependent economies may offer an early glimpse of this: many resource-rich countries have extremely poor populations and no democratic institutions, because their rulers are funded by oil and mineral wealth rather than by their citizens’ labor.
The Information Problem
If everyone is unemployed, that means all media is automated too. All the information you consume would be AI-generated and AI-curated. We’re already seeing how AI-powered social media algorithms fragment society. These systems, currently optimized merely for engagement, have discovered that lies, misinformation, and conspiracy theories often generate the most interaction.
Now imagine a world where not just the algorithms but all the content itself is AI-generated, with algorithms explicitly designed to mislead and divide. People could no longer organize their thoughts coherently enough even to conceive of rebellion or imagine alternative social structures.
Caveat: We Probably Won’t Get That Far
It’s extremely difficult to imagine a society where we’re all unemployed and humanity is still around. Think about what’s required for a mass unemployment scenario: an AGI system powerful enough to run society and automate every job, including research, policing, and the military. Such a system would obviously be lethally dangerous; if we get it catastrophically wrong, we die. We only get one shot at this, and without a promising plan the odds don’t look good.
The Bottom Line
Being useful keeps us alive. This is more fundamental than constitutional rights or free speech. In the real world, we’re aligned with other humans largely because we need each other. Our jobs mean we’re still useful to the economy, useful enough that others will provide us with the goods and services they produce.
But more fundamentally: we’re unlikely to survive long enough to face this dilemma. The kind of AGI powerful enough to automate all jobs is the kind of AGI powerful enough to end humanity if misaligned. And we’re rushing toward building it with no credible plan for alignment.
A Comment from a Reader

Thank you for writing this! I think a lot of people miss this point and keep talking about UBI in the AI future without being clear about which power bloc would ensure that UBI continues to exist, and why.
However, I’d like to offer a significant correction. Your point exactly matched my thinking until a few months ago. Then I realized something that changes the picture a lot, and that I think is crucial to understand.
Namely, elites have always needed the labor of the masses. The labor of serfs was needed, the labor of slaves was needed. That circumstance kept serfs and slaves alive, but not in an especially good position. The masses were exploited by elites throughout most of history. And it doesn’t depend on economic productivity either: a slave in a diamond mine can have very high productivity by the numbers, but still be enslaved.
The circumstance that changed things, and that (temporarily) gave the masses in Western countries a better position than serfs had in the past, was the military relevance of the masses. It started with the invention of firearms: a peasant with a gun can be taught to shoot a knight dead, and knights correctly saw even at the time that this would erode their position. I’m not talking about rebellion here (rebellions by the masses against the elites have always been very hard), but rather about whether the masses are needed militarily for large-scale conflicts.
And given military relevance, economic productivity isn’t actually that important. It’s possible to have a leisure class that doesn’t do much work beyond being militarily relevant; knights are a good example. It’s actually pretty hard to find historical examples of classes that were militarily relevant but treated badly; even warhorses were treated much better than peasants’ horses. Being useful keeps you alive, but exploited; being dangerous is what keeps you alive and treated well. If by some miracle we end up with a world where the masses remain militarily relevant but aren’t needed for productive work, then I can imagine the masses as a whole becoming such a leisure class. That would be a nice future if we could get it.
However, as you point out, the future will bring not just AI labor but AI armies as well. Ensuring the military relevance of the masses seems just as difficult as ensuring their economic relevance. So my comment, unfortunately, doesn’t replace the problem with an easier one, just with a different one.