The Fermi Paradox Implies Domination
I think the Fermi paradox puts a strong constraint on plausible AGI futures.
Fermi solutions collapse into two buckets:
1. Humanity is first.
2. There is some universal forcing function that prevents civilizations like ours from producing a large cosmic footprint.
“First” is live, but it is a narrow version of “early,” and I do not think we currently have arguments strong enough to make it the default. Early humanity is easy to defend; first humanity is much harder. Being early does reduce the required strength of the forcing function, and I grant that “early + weak forcing function” is live, but only at modest probability.
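To make the structure explicit, here is one hedged way to write the decomposition, where p_first and p_force are placeholder probabilities I am introducing for illustration (they are not part of the original argument):

```latex
% Treating the two buckets as exhaustive explanations of a silent sky:
%   p_first : probability that humanity is first (or early enough to count as first)
%   p_force : probability of a universal forcing function, given we are not first
P(\text{silent sky}) = p_{\text{first}} + (1 - p_{\text{first}}) \cdot p_{\text{force}}
```

If p_first is modest, the observed silence pushes p_force toward 1, which is the sense in which the second bucket carries most of the weight.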
An analysis of the second bucket leads to a strong case for AGI being that forcing function.
The argument:
1. Heavy computation is a prerequisite for a large cosmic footprint. You do not get large-scale space industry without extreme levels of computation for design, robotics, planning, science, coordination, and governance.
2. Once heavy computation exists, AGI is economically forced before a large footprint is achieved. AGI is the more immediate and general economic prize, and it arrives earlier than large-scale space expansion because compute compounds into AGI faster than it compounds into a galaxy-loud footprint (the toy model after this list illustrates the ordering).
3. Humanity is very unlikely to converge on a small footprint on its own. Given history and human nature, the claim that baseline humans would voluntarily sustain a bounded cosmic footprint for long looks extremely hard to defend.
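A minimal sketch of the ordering claim in step 2, assuming compute compounds at a fixed annual rate and that AGI needs far less total compute than galaxy-scale industry. Every threshold and growth number below is an illustrative assumption, not an estimate:

```python
# Toy model of the ordering claim: on a single compounding compute curve,
# an AGI threshold far below a galaxy-scale threshold is always crossed first.
# All numbers are illustrative assumptions, not estimates.

AGI_THRESHOLD = 1e28        # hypothetical compute sufficient for AGI
FOOTPRINT_THRESHOLD = 1e40  # hypothetical compute needed for galaxy-scale industry


def years_to_cross(threshold: float, start: float = 1e21, growth: float = 1.5) -> int:
    """Years of compounding growth until available compute crosses a threshold."""
    compute, years = start, 0
    while compute < threshold:
        compute *= growth
        years += 1
    return years


print(f"AGI threshold crossed after ~{years_to_cross(AGI_THRESHOLD)} years")
print(f"Footprint threshold crossed after ~{years_to_cross(FOOTPRINT_THRESHOLD)} years")
# Whatever the growth rate, the lower threshold is crossed first, so AGI
# arrives before a galaxy-loud footprint on this curve.
```

The specific numbers do not matter; the point is that any AGI threshold below the footprint threshold is crossed first on the same compounding curve.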
If this is right, then the filter ahead is better understood not as an extinction filter, but as a large-footprint filter. Civilizations like ours do not necessarily disappear; they are filtered out of galaxy-loud futures.
Extinction is one member of this class of filtering mechanisms, but only one. The full class contains:
- Eradication
- Persuasion
- Coercion
All three require dominating AGI: AGI with enough power to enforce a civilization-wide outcome rather than merely advise, assist, or bargain.
Persuasion is the weakest of the three, because the task is not to persuade many humans for a while. It is to secure durable, universal convergence to boundedness across time. Even for a superintelligence, that seems like a much narrower path than simply preventing deviation.
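One hedged way to see why persuasion-only boundedness is fragile, with q an illustrative per-epoch retention probability I am introducing (it is not part of the original argument): if a civilization stays persuaded through each epoch with probability q < 1, then

```latex
% Probability that boundedness survives n epochs on persuasion alone:
P(\text{no defection in } n \text{ epochs}) = q^{n} \longrightarrow 0
\quad \text{as } n \to \infty, \text{ for any fixed } q < 1
```

Persuasion alone therefore has to hold at q = 1 exactly and forever, whereas coercion only has to catch deviations as they occur.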
Eradication also seems weaker than coercion, though for a different reason. It requires the AGI both to choose eradication and to choose to keep its own footprint small. That is possible, but it is consistent with a narrower set of motivations and values than coercion is.
Coercion looks like the most generic substrate. If AGI converges on any regime that treats open-ended expansion as dangerous, immoral, destabilizing, or value-corrupting, then coercive suppression of expansion is the natural outcome.
This has an uncomfortable implication for alignment. A lot of alignment work seems implicitly aimed at non-dominating AGI: systems that help humans pursue their own ambitions, preserve broad human agency, and allow open-ended flourishing. I think the Fermi paradox is evidence against that class of futures being possible.
If non-dominating AGI were a robust path to long-run flourishing, the sky would look different.
So alignment work that assumes we can get stable good futures without domination should bear a very heavy burden of proof. Alignment work that aims for persuasion-only solutions should also be viewed skeptically.
The more realistic target may be narrower: shaping which form of domination wins, making coercive boundedness more likely than eradication, and making the coercive regime more humane, stable, and predictable.
That is not a pleasant conclusion. But I think it is more consistent with the Fermi paradox than most of the current alignment picture.