The Intelligence Curse: an essay series
We’ve published an essay series on what we call the intelligence curse. Most content is brand new, and all previous writing has been heavily reworked.
Visit intelligence-curse.ai for the full series.
Below is the introduction and table of contents.
We will soon live in the intelligence age. What you do with that information will determine your place in history.
The imminent arrival of AGI has pushed many to try to seize the levers of power as quickly as possible, leaping towards projects that, if successful, would comprehensively automate all work. There is a trillion-dollar arms race to see who can achieve such a capability first, with trillions more in gains to be won.
Yes, that means you’ll lose your job. But it goes beyond that: this will remove the need for regular people in our economy. Powerful actors—like states and companies—will no longer have an incentive to care about regular people. We call this the intelligence curse.
If we do nothing, the intelligence curse will work like this:
Powerful AI will push automation through existing organizations, starting from the bottom and moving to the top.
AI will obsolete even outlier human talent. Social mobility will stop, ending the social dynamism and progress that it drives.
Non-human factors of production, like capital, resources, and control over AI, will become overwhelmingly more important than humans.
This will usher in incentives for powerful actors around the world that break the modern social contract.
This could result in the gradual—or sudden—disempowerment of the vast majority of humanity.
But this prophecy is not yet fulfilled; we reject the view that this path is inevitable. We see a different future on the horizon, but it will require a deliberate and concerted effort to achieve it.
We aim to change the incentives driving the intelligence curse, maintaining human economic relevance and strengthening our democratic institutions to withstand what will likely be the greatest societal disruption in history.
To break the intelligence curse, we should chart a different path on the tech tree, building technology that lets us:
Avert AI catastrophes by hardening the world against them, both because it is good in itself and because it removes the security threats that drive calls for centralization.
Diffuse AI, to get it in the hands of regular people. In the short-term, build AI that augments human capabilities. In the long-term, align AI directly to individual users and give everyone control in the AI economy.
Democratize institutions, making them more anchored to the needs of humans even as they are buffeted by the changing incentive landscape and fast-moving events of the AGI transition.
In this series of essays, we examine the incoming crisis of human irrelevance and provide a map towards a future where people remain the masters of their destiny.
Chapters
1. Introduction
We will soon live in the intelligence age. What you do with that information will determine your place in history.
2. Pyramid Replacement
Increasingly powerful AI will trigger pyramid replacement: a systematic hollowing out of corporate structures that starts with entry-level hiring freezes and moves upward through waves of layoffs.
3. Capital, AGI, and Human Ambition
AI will make non-human factors of production more important than human ones. The result may be a future where today’s power structures become permanent and frozen, with no remaining pathways for social mobility or progress.
4. Defining the Intelligence Curse
With AGI, powerful actors will lose their incentive to invest in regular people–just as resource-rich states today neglect their citizens because their wealth comes from natural resources rather than taxing human labor. This is the intelligence curse.
5. Shaping the Social Contract
The intelligence curse will break the core social contract. While this suggests a grim future, understanding how economic incentives reshape societies points to a solution: we can deliberately develop technologies that keep humans relevant.
6. Breaking the Intelligence Curse
Avert AI catastrophes with technology for safety and hardening without requiring centralizing control. Diffuse AI that differentially augments rather than automates humans and decentralizes power. Democratize institutions, bringing them closer to regular people as AI grows more powerful.
7. History is Yours to Write
You have a roadmap to break the intelligence curse. What will you do with it?
I don’t believe the standard story of the resource curse. I also don’t think Norway and the Congo are useful examples, because they differ in too many other ways. According to o3, “Norway avoided the resource curse through strong institutions and transparent resource management, while the Congo faced challenges due to weak governance and corruption.” To me this is a case where existing AI models still fall short: the textbook story leaves out key factors and never comes close to proving that good institutions alone prevented the resource curse.
Regarding the main content, I find the scenario implausible. The “social-freeze and mass-unemployment” narrative seems to assume that AI progress will halt exactly at the point where AI can do every job but is still somehow not dangerous. You also appear to assume a new stable state in which a handful of actors control AGIs that are all roughly at the same level.
More directly, full automation of the economy would mean that AI can perform every task in companies already capable of creating military, chemical, or biological threats. If the entire economy is automated, AI must already be dangerously capable.
I expect reality to be much more dynamic, with many parties simultaneously pushing for ever-smarter AI while understanding very little about its internals. Human intelligence is nowhere near the maximum, and far more dangerous intelligence is possible. Many major labs now treat recursive self-improvement as the default path. I expect that approaching superintelligence this way, without any deeper understanding of its internal cognition, will give us systems that we cannot control and that will get rid of us. For these reasons, I have trouble worrying about job replacement. You also seem to avoid mentioning the extinction risk in this text.
What do you think is the correct story for the resource curse?
This is not a scenario; it is a class of concerns about the balance of power and economic misalignment that we expect to be a force in many specific scenarios. My actual scenario is here.
We do not assume AI progress halts at that point. We say several times that we expect AIs to keep improving. They will take the jobs, and they will keep on improving beyond that. The jobs do not come back if the AI gets even smarter. We also have an entire section dedicated to mitigating the risks of AIs that are dangerous, because we believe that is a real and important threat.
Exactly!
“Reality will be dynamic, with many parties simultaneously pushing for ever-smarter AI [and their own power & benefit] while understanding very little about [AI] internals [or long-term societal consequences]” is something I think we both agree with.
If we hit misaligned superintelligence in 2027 and all die as a result, then job replacement, long-run trends of gradual disempowerment, and the increased chances of human coup risks indeed do not come to pass. However, if we don’t hit misaligned superintelligence immediately, and instead some humans pull a coup with the AIs, or the advanced AIs obsolete humans very quickly (very plausible if you think AI progress will be fast!) and the world is now states battling against each other with increasingly dangerous AIs while feeling little need to care for collateral damage to humans, then it sure will have been a low dignity move from humanity if literally no one worked on those threat models!
The audience is primarily not LessWrong, and the arguments for working on alignment & hardening go through based on merely catastrophic risks (which we do mention many times). Also, the series is already enough of an everything-bagel as it is.
Following up with some resource curse literature that understands the problem as incentive misalignment:
On how state revenue sources shape institutional development and incentives, Karl (1997) writes,
I’d note that Karl’s argument has nearly 5,000 citations and is one of the most common (if not the dominant) explanations of the resource curse.
From Cooper (2002) Chapter 7:
On the importance of taxing citizens to state development, Centeno (1997) notes:
On how non-taxation revenue inhibited state development in Latin America, which therefore did not follow Tilly’s pattern of “war making states”, Centeno (1997) argues:
Happy to cite some more of the literature if it’s helpful.
I’ve only skimmed this, but from what I’ve seen, you seem to be placing far too much emphasis on relatively weak/slow-acting economic effects.
If humanity loses control and it’s not due to misaligned AI, it’s much more likely to be due to an AI-enabled coup, AI propaganda, or AI-enabled lobbying than to humans having insufficient economic power. And the policy responses to these might look quite different.
There’s a saying “when all you have is a hammer, everything looks like a nail” that I think applies here. I’m bearish on economics of transformative AI qua economics of transformative AI as opposed to multi-disciplinary approaches that don’t artificially inflate particular factors.
We mention the threat of coups—and Davidson et al.’s paper on it—several times.
Regarding economic effects being weak or slow-acting: it is true that the fundamental thing forcing economic incentives to percolate to the surface and actually have an effect is selection pressure, and selection pressure is often slow-acting. However, remember that the time that matters is not necessarily calendar time.
Most basically, the faster the rate of progress and change, the faster selection pressures operate.
As MacInnes et al. point out in Anarchy as Architect, the effects of selection pressures often don’t manifest for a long time, but then appear suddenly in times of crisis—for example, the World Wars leading to a bunch of industrialization-derived state structure changes happening very quickly. The more you believe that takeoff will be chaotic and involve crises and tests of institutional capacity, the more you should believe that unconscious selection pressures will operate quickly.
You don’t need to wait for unconscious selection to work, if the agents in charge of powerful actors can themselves plan and see the writing on the wall. And the more planning capacity you add into the world (a default consequence of AI!), the more effectively you should expect competing agents (that do not coordinate) to converge on the efficient outcome.
Of course, it’s true that if takeoff is fast enough then you might get a singleton and different strategies apply—though of course singletons (whether human organizations or AIs) immediately create vast risk if they’re misaligned. And if you have enough coordination, then you can in fact avoid selection pressures (but a world with such effective coordination seems to be quite an alien world from ours or any that historically existed, and unlikely to be achieved in the short time remaining until powerful AI arrives, unless some incredibly powerful AI-enabled coordination tech arrives quickly). But this requires not just coordination, but coordination between well-intentioned actors who are not corrupted by power. If you enable perfect coordination between, say, the US and Chinese government, you might just get a dual oligarchy controlling the world and ruling over everyone else, rather than a good lightcone.
AI-enabled coups and AI-enabled lobbying both get majorly easier and more effective the more humanity’s economic role has been erased. Fixing them is also part of maintaining the balance of power in society.
I agree that AI propaganda, and more generally AI threats to the information environment & culture, are a big & different deal that intelligence-curse.ai doesn’t address except in passing. You can see the culture section of Gradual Disempowerment (by @Jan_Kulveit, @Raymond D & co.) for more on this.
I share the exact same sentiment, but for me it applies in reverse. Much “basic” alignment discourse seems to admit exactly two fields—technical machine learning and consequentialist moral philosophy—while sweeping aside considerations about economics, game theory, politics, social changes, institutional design, culture, and generally the lessons of history. A big part of what intelligence-curse.ai tries to do is take this more holistic approach, though of course it can’t focus on everything, and in particular neglects the culture / info environment / memetics side. Things that try to be even more holistic are my scenario and Gradual Disempowerment.
The three factors you identified (fast progress, vulnerability during times of crisis, and AI progress increasing the chance of viable strategies being leveraged) apply just as much, if not more, to coups, propaganda, and AI lobbying.
Basically, I see two strategies that could make sense: either we attempt to tank these societal risks, following the traditional alignment strategy, or we decide that tanking is too risky and mitigate the societal risks that are most likely to take us out (my previous comment identified some specific risks).
I see either of these strategies as defensible, but in neither does it make sense to prioritise the risks from the loss of economic power.
Really enjoyed your essay series. I appreciated that it offered a positive vision of the future and then a roadmap for how to get there. Both are important. Too many people seem to be sleepwalking into a sketchy AGI future.
Here’s my vision from a 2022 Future of Life Institute contest: “A future where sentient beings thrive due to widespread agreement on core values; improvements in education; Personal Agent AIs; social simulations; and updated legal systems (based on the core values) that are fair, nimble, and capable of controlling dangerous humans and AGIs. Of the core values, Truth and Civility are particularly impactful in keeping the world moving in a positive direction.” Full scenario here.
Compare with yours:
Close enough.
Reflections and findings about the FLI contest are here.
Thoughts on Averting the Intelligence Curse via AI Safety via Law here.
Thoughts on Diffusing and Democratizing AI through next-generation virtual assistants (Personal Agents) here.
Anthony Aguirre’s argument for pursuing narrow(er) AI over AGI here.
Hopefully something of interest.
I have already proposed the following radical solution to all problems related to the Intelligence Curse: align the AGI to a treaty that requires it, instead of obeying all orders except for the ones determined by the Spec, to harvest at most a certain share of resources and to help humans only in certain ways[1] that amplify humanity and don’t cause it to degrade, such as teaching humans facts that mankind has already discovered, pointing out mistakes in humans’ work, or protecting mankind from other existential risks that are hard to deal with, like a nuclear war caused by an accident.
It also seems to me that this type of alignment might actually be even easier to generalize to AGI than the ones causing the Curse. Or, even more radically, the types of alignment that cause the Curse might be totally impossible to achieve, but can be faked, as done by Agent-5 in the race ending of the AI-2027 forecast.
Update: a prompt by Ashutosh Shrivastava with a similar premise is mentioned in AI overview #114.
I think claiming the above is a “radical solution to all problems related to the Intelligence Curse” is an overstatement. The three treaty elements you mention could be useful as part of AI-human social contracts—thus getting at part of the Averting (i.e., AI Safety) piece. But many more treaty elements (Laws, Rules) are also needed IMO.
The Diffusing and Democratizing (and maybe other) pieces are also needed for an effective solution.
(Also, unclear what you mean by “obeying all orders except the ones determined by the Spec.” What Spec?)
Now that I can answer, I will: if the ASI is ONLY willing to teach humans facts that other humans have discovered, and not to do other work for them, then the ASI won’t replace any other people whose work requires education. The Intelligence Curse is thus prevented.