Concentration Risk Is Probably More Important Than Alignment Risk, And It’s Heading to a Doom Scenario
Summary: Power concentration via AI is likely a greater near-term risk than classical misalignment, primarily because (a) it converges under a wider range of AI development scenarios including ones where alignment succeeds, (b) the timelines are faster and more empirically grounded, and (c) the three mechanisms society would normally use to correct it (public awareness, legislation, and democratic pressure) interact in a way that makes them structurally insufficient given realistic displacement speeds. I present a simulation calibrated to current empirical data that suggests the intervention window closes around 2027 under base-case assumptions. I assign >50% probability to stable autocracy being the default attractor of the current trajectory. I expect this post to be wrong in specific ways I will enumerate; I’d value pushback on the model assumptions more than the framing.
Why This Is Different From Prior AGI Risk Discussions
The 80K problem profile on extreme power concentration and power grabs is good, and I won’t rehash it. The 80K piece correctly identifies the risk and notes that gradual disempowerment dynamics are underexplored relative to deliberate power grabs. This post is specifically about the disempowerment pathway and its interaction with societal reaction mechanisms. The key claim I want to add to the existing discussion is not “concentration is bad” (that’s established) but rather that the dynamics by which concentration happens, and by which society would normally correct it, make the outcome much more deterministic than the LW community currently treats it.
The 80K piece lists interventions that could help. This post argues that those interventions operate on timescales and through mechanisms that are structurally insufficient given empirically observed displacement speeds, and that the interaction of awareness lag, political lag, and elite capture creates a convergent failure mode that most analyses treat as three separate problems rather than one compounding system.
The Core Argument
1. Concentration is the more convergent risk
Alignment failure requires AI systems to pursue misspecified objectives in ways that are both powerful enough to matter and resistant enough to correction to actually cause catastrophe. This is a real risk, but it requires a specific conjunction of conditions.
Concentration requires only that:
AI significantly increases the productivity advantage of those who control it (already happening)
That advantage compounds through market dynamics (demonstrably happening)
That existing political correction mechanisms are too slow or too capturable to interrupt the compounding before it reaches a self-reinforcing threshold
This is a much weaker set of conditions. It does not require AI to be misaligned. It does not require bad actors. It happens as the output of individually rational behavior by well-intentioned people operating inside normal economic incentive structures. An AI lab CEO who genuinely wants good outcomes for humanity can, through entirely reasonable decisions, participate in a process that concentrates power to a degree that makes “good outcomes for humanity” no longer a decision they can deliver.
More importantly, alignment success does not reduce concentration risk; it may increase it. A perfectly obedient AI that reliably executes operator preferences is a more effective instrument of concentration than a partially reliable one. The scenarios where we solve alignment and things go fine are only a subset of the scenarios where AI is safe; the scenarios where we solve alignment and concentration proceeds unchecked are not obviously a small subset.
2. The R&D feedback loop is an empirically observed phenomenon, not a theoretical concern
METR’s March 2025 measurements show that the length of tasks AI agents can autonomously complete has been doubling approximately every 7 months for 6 years. Epoch AI documents algorithmic efficiency improving at ~3x/year. Frontier training compute is growing ~5x/year. These are not projections; they are measurements.
The relevant implication is not just that AI is getting better; it’s that the improvement rate is itself accelerating. A lab that is even slightly ahead today does not maintain a constant lead; it rides a steeper curve than its competitors, so the gap in log-capability between a leading and a trailing lab grows every month. This creates winner-takes-most market dynamics not because anyone designed it that way, but because the feedback structure of AI R&D makes divergence near-inevitable at current improvement rates.
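To make the compounding concrete, here is a toy calculation, separate from the calibrated model described below. Every number in it is invented for illustration; the only structural assumption, which mirrors the feedback-loop claim above, is that a lab’s growth rate increases with its current capability.

```python
# Toy model of the R&D feedback loop (illustrative numbers only).
# Each lab's monthly growth rate rises with its current capability,
# so a 10% initial lead compounds instead of staying constant.

BASE_GROWTH = 0.05   # baseline monthly growth rate (assumed)
FEEDBACK = 0.02      # extra monthly growth per unit of capability (assumed)

lead, trail = 1.10, 1.00  # trailing lab normalized to 1.0

for month in range(1, 37):
    lead *= 1 + BASE_GROWTH + FEEDBACK * lead
    trail *= 1 + BASE_GROWTH + FEEDBACK * trail
    if month % 12 == 0:
        print(f"year {month // 12}: capability ratio = {lead / trail:.2f}")
```

With any positive feedback coefficient the ratio is strictly increasing; the invented constants only control how fast.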
3. Labor displacement removes the structural basis for democratic correction
This is the part of the argument I expect the most pushback on, and where I am most uncertain.
The standard model of democratic correction assumes that concentrated power generates political resistance because the people being harmed retain some form of leverage. Workers can strike. Consumers can boycott. Citizens can vote. These mechanisms have successfully checked concentrated power before: Standard Oil, AT&T, and the early-20th-century financial trusts were all curbed this way.
All of these mechanisms depend on human labor being scarce enough to matter economically. A population whose every economic contribution can be replicated more cheaply by an AI system has no leverage in any negotiation. This is not a distributional point about income; it is a point about power. The concern is not that people will be poor (though they will be), but that they will simultaneously discover that their bargaining position has disappeared and that the political system no longer has an adequate incentive to represent their interests.
McKinsey (Nov 2025) estimates that 57% of US work hours are automatable with current technology. Anthropic researchers have publicly predicted near-full white-collar automation by 2027-28. I’m uncertain about blue-collar timelines: humanoid robotics progress is real, but Ken Goldberg and others are credible skeptics of 3-5 year timelines. I’ll use a range rather than a point estimate.
The Simulation
I built a quantitative model with the following structure. The full interactive version is available here. This section describes the architecture; I’ll focus on the non-obvious parts.
Concentration side: Lab power share grows as a function of capability lead (modeled from METR doubling-time data) × displaced labor value captured × market share. The R&D feedback multiplier is the key parameter: it determines whether a capability lead compounds or stays roughly constant.
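As a minimal sketch of how I read that growth law (the function name, the coupling constant, and the saturation at 100% are my illustrative assumptions, not the calibrated choices in the interactive model):

```python
def step_power_share(share: float, capability_lead: float,
                     displaced_value: float, dt: float = 1 / 12,
                     gain: float = 0.5) -> float:
    """One time step of lab power share growth.

    share            -- current lab power share, in [0, 1]
    capability_lead  -- leader's capability advantage over the field
    displaced_value  -- fraction of displaced labor value captured so far
    gain             -- illustrative coupling constant (assumed)
    """
    growth = gain * capability_lead * displaced_value * share
    return min(1.0, share + growth * dt)

# Example: one monthly step from a 15% share (illustrative inputs).
print(step_power_share(share=0.15, capability_lead=1.5, displaced_value=0.3))
```

The multiplicative form is what makes early leads matter: growth scales with the share itself, so the curve is exponential until it saturates.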
Reaction side: Three mechanisms, modeled as interacting differential equations:
Awareness: Builds as a logistic function of visible unemployment, delayed by 1-3 years. The delay reflects well-documented psychological lags in recognizing individual misfortune as a systemic crisis. I use 1.5yr as baseline.
Policy: Builds from awareness, further delayed by political lag (baseline 2.5yr, based on historical tech regulation timelines — GDPR: 4yr, Sherman Act: 6yr from public pressure to enactment). Policy strength is also scaled by an effectiveness parameter representing how much enacted legislation actually constrains behavior.
Elite capture: This is the key mechanism. As lab power share grows, lobbying spend grows exponentially: empirically, OpenAI’s lobbying spend grew 7x in one year (2023→2024); Big Tech combined was spending roughly $400K/day on lobbying Congress by H1 2025; Nvidia increased lobbying spend 388% year-over-year. Elite capture erodes the practical effect of enacted legislation, up to a ceiling representing maximum regulatory capture. The capture rate is calibrated to observed lobbying growth curves.
The critical interaction: These mechanisms don’t add — they multiply. Elite capture doesn’t just slow new legislation; it retroactively erodes enacted legislation through enforcement gaps, carve-outs, and revolving-door appointments. This means that even as awareness rises and policy is passed, net reaction force (policy strength × (1 - elite capture)) can stay near zero or turn negative. I call this the reaction illusion — the system appears to be responding while the practical constraint on concentration remains near zero.
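To make the interaction concrete, here is a minimal sketch of the reaction side under the structure described above. The lags are the baselines from this post; every other coefficient, plus the stylized unemployment and power-share paths, are invented for illustration and are not the calibrated values from the interactive model.

```python
import numpy as np

DT = 1 / 12            # monthly steps, in years
N = int(15 / DT)       # 15-year horizon

AWARENESS_LAG = 1.5    # years (baseline from this post)
POLICY_LAG = 2.5       # years (baseline from this post)
CAPTURE_CEILING = 0.9  # maximum regulatory capture (assumed)

# Stylized exogenous paths (the real model generates these endogenously).
unemployment = np.linspace(0.05, 0.45, N)
power_share = np.linspace(0.10, 0.70, N)

awareness = np.zeros(N)
policy = np.zeros(N)
capture = np.zeros(N)

for t in range(1, N):
    # Awareness: logistic response to unemployment seen AWARENESS_LAG ago.
    lag_a = max(0, t - int(AWARENESS_LAG / DT))
    target = 1 / (1 + np.exp(-12 * (unemployment[lag_a] - 0.20)))
    awareness[t] = awareness[t - 1] + (target - awareness[t - 1]) * DT

    # Policy: accumulates from awareness seen POLICY_LAG ago, capped at 1.
    lag_p = max(0, t - int(POLICY_LAG / DT))
    policy[t] = min(1.0, policy[t - 1] + 0.4 * awareness[lag_p] * DT)

    # Elite capture: grows with lab power share, up to the ceiling.
    capture[t] = min(CAPTURE_CEILING,
                     capture[t - 1] + 0.6 * power_share[t] * DT)

# The multiplicative interaction: net force = policy * (1 - capture).
net_force = policy * (1 - capture)
print(f"final policy strength: {policy[-1]:.2f}")
print(f"final elite capture:   {capture[-1]:.2f}")
print(f"final net reaction force: {net_force[-1]:.2f}")
```

With these invented parameters, policy strength saturates near its maximum while net reaction force ends around 0.1: the reaction illusion in miniature. The structural point is the multiplication, not the specific numbers.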
Key outputs under base-case parameters:
Intervention window closes: ~2027
Crisis threshold (60% lab power share): ~2031 under my displacement scenario, ~2038 under Amodei’s public estimates, ~2041 under Davos consensus
Net reaction force at crisis: <8% in all scenarios
Important caveats on the model: It assumes smooth exponential dynamics; real systems have friction that creates discontinuities. It treats the lab cluster as relatively unified; US-China competition is a real fracture. It does not model energy constraints on compute, which some credible researchers argue are binding. The 60% threshold is a modeling choice, not a derived quantity. I would be surprised if the model is directionally wrong, but would not be surprised if specific timelines are off by 3-5 years.
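A quick back-of-envelope shows why the timelines are this parameter-sensitive. If lab power share grows roughly exponentially from an initial share s0 at rate r, the 60% threshold is crossed at t = ln(0.60/s0)/r; s0 and the range of r below are illustrative assumptions, not outputs of the model.

```python
import math

# Sensitivity of the crisis-threshold date to the assumed growth rate.
s0 = 0.10                       # initial lab power share (assumed)
for r in (0.20, 0.30, 0.45):    # annual growth rates of power share (assumed)
    t_crisis = math.log(0.60 / s0) / r
    print(f"r = {r:.2f}/yr -> 60% threshold after {t_crisis:.1f} years")
```

A growth rate moving between 0.20/yr and 0.45/yr shifts the crossing date from roughly nine years out to roughly four: a five-year swing from a single parameter, which is why I hold the direction more firmly than the dates.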
The Main Objections and My Responses
Objection 1: Historical precedent shows that democracies eventually correct concentrated power
This is the strongest objection, and I take it seriously. Standard Oil, AT&T, the New Deal — concentrated power has been successfully checked before.
My response: Each historical case of successful correction had conditions that do not hold here: an independent press with no financial relationship to the company being broken up; legislators with minimal financial entanglement; a public experiencing visceral, attributable daily harm from a specific, identifiable entity. Most importantly, the affected population retained economic leverage throughout the period of organizing.
The specific mechanism by which this case differs is that the removal of economic leverage and the arrival of awareness happen simultaneously. In every prior case, people organized while they could still impose costs. In this case, the shock that generates awareness is the same event that removes the ability to impose costs. I don’t know of a historical analog for this conjunction.
Objection 2: Elite capture is endogenous — as it gets worse, the political salience of opposing it increases
This is a real mechanism, and I may be underweighting it. Some political entrepreneurs benefit from opposing concentrated tech power — left-wing populists, small-business conservatives, foreign governments worried about US tech dominance. These coalitions have historically formed and been effective.
My response: The mechanism exists, but the question is timing. The coalition needed to pass meaningful anti-capture legislation (campaign finance reform, lobbying restrictions, cooling-off periods) has to form and win before the labs have enough political power to prevent it. Given current lobbying trajectories and the 2-4 year political lag on legislation, I’m skeptical this happens in the available window. I would update significantly on evidence that anti-capture legislation was advancing rapidly.
Objection 3: The US-China dynamic prevents any single entity from reaching the concentration threshold
This is the strongest empirical objection to my specific model. If global AI capability is split between two competing blocs rather than concentrated in one, the stable autocracy scenario may not obtain. We might instead get competing concentrated powers, which is bad in different ways but does not fit the model’s predictions.
My response: I think this is a genuine uncertainty that my model underweights. Geopolitical fracture is a real countervailing force. However, I note that within each bloc, the concentration dynamics still apply, and a world where two power-concentrated blocs compete may be almost as bad as a world with one. I’d appreciate pushback on whether the bifurcated scenario is stable and what it implies.
Objection 4: Open-source AI prevents concentration
Open-source model weights (Llama, Mistral, Qwen) are a real distributional mechanism. I take this seriously.
My response: open weights distribute capability but not the infrastructure to deploy it at scale. The bottleneck for economic power is not access to model weights — it’s access to compute, data, distribution, and capital. All of these remain concentrated regardless of weight licensing. Open-source helps with the lab concentration component; it does not help with the capital concentration component that emerges as labor is displaced.
What Would Change My Mind
I want to be explicit about this because I think the LW community is right to be skeptical of unfalsifiable doom arguments.
I would substantially revise my probability estimate downward if:
Anti-capture legislation (meaningful lobbying restrictions, campaign finance reform for tech companies) passed in major jurisdictions before 2027. This would suggest the political lag is shorter than I modeled.
Blue-collar automation timelines extended significantly beyond 2035 (say, credible expert consensus on >15 years for majority displacement). This reduces the rate at which labor’s bargaining power collapses.
Energy constraints on compute scaling proved binding in the 3-5 year horizon, significantly slowing the R&D feedback loop.
The US-China bifurcation hardened sufficiently to prevent either lab cluster from achieving >40% of global effective AI capability.
A successful antitrust structural remedy was applied to a major AI lab before it reached a dominant market position. This would update me on whether the intervention mechanisms are faster than I modeled.
I would revise upward if:
Lobbying spend continues growing >3x/year for another 2 years
White-collar displacement visibly accelerates in 2025-26 without legislative response
International AI coordination mechanisms fail to materialize by 2026
The Paths That Could Work
I don’t want this to read as pure doom posting, so I’ll briefly name the interventions that actually move the needle in the model.
Anti-capture reform (window: now-2026): Structural limits on AI company lobbying and political donations, cooling-off periods for executives moving between labs and regulatory agencies. This needs to happen before the labs have the political power to prevent it. According to the model, this is the highest-leverage single intervention.
Pre-emptive ownership restructuring (window: now-2027): Citizen equity stakes in AI infrastructure, such as sovereign AI funds, public compute commons, and mandatory equity for displaced workers, implemented while labor still has the political power to demand it. The Norway model worked because it was applied before the resource curse could take hold, not during it.
Compressed political response mechanisms: Standing regulatory authority that doesn’t require new legislation per intervention. Reduces political lag from 3-5 years to under 18 months. This is a process reform rather than a policy reform.
International coordination (window: closing): Treaty-based compute monitoring, mandatory multi-polar participation in frontier AI. The window here is approximately correlated with US domestic political capture. Once a captured US government actively opposes international coordination, the window closes. Signs of this are already present as of 2025.
The key insight the model produces is that these interventions need to be implemented simultaneously, not sequentially, and they need to happen in the next 2-3 years. Each year of delay increases the elite capture erosion that any intervention must overcome.
My Probability Estimate
I assign approximately 55-65% probability to an outcome I would describe as stable autocracy or functional equivalent within 20 years, defined as: a small cluster of entities (≤10) controlling >60% of effective global economic power in a way that is self-reinforcing and not reversible through conventional democratic mechanisms.
The main sources of my uncertainty:
Geopolitical fracture (US-China) as a structural countervailing force: I think this is underweighted in my model. ~15% probability mass on a “competing autocracies” outcome that is bad but different.
Non-linear awareness cascade: historically rare but real. In this scenario, mass recognition of the problem arrives faster than the capture mechanism can adapt. ~10% probability mass.
Binding physical constraints on AI scaling (energy, compute): ~10% probability of significantly slowing the R&D feedback loop.
Successful early intervention: I am pessimistic but not certain. ~15% probability of meaningful structural reform in the available window, if political will materializes faster than my model suggests.
These numbers are rough and probably overconfident given my uncertainty about the model; I’d value serious engagement with the probability estimates specifically.
What I’d Most Value From This Discussion
Pushback on the model architecture: specifically, the elite capture mechanism and whether I’m correctly modeling its interaction with policy effectiveness. This is the least empirically grounded component.
Evidence on blue-collar automation timelines: this is where my uncertainty is highest and where the outcome is most sensitive.
Historical analogs: cases where concentrated economic power was successfully checked after the economic leverage of the affected population had already been substantially reduced. I don’t know of any, but I may be missing cases.
The bifurcated scenario: whether US-China competition prevents the single-entity concentration I model, and what the bifurcated outcome looks like structurally.
Prior work I’ve missed: I’m aware of the 80K profile and Forethought’s AI-enabled coups report. I’m less aware of quantitative modeling work on the disempowerment pathway specifically. Pointers welcome.
The interactive simulation referenced in this post models the full system, including all three reaction mechanisms, and allows parameter sensitivity analysis. It is calibrated to METR 2025 capability measurements, Epoch AI efficiency data, McKinsey displacement estimates, and observed lobbying growth rates.