ASI utilizing resources humans don’t value highly (such as the classic zettaflop-scale hyperwaffles, non-Euclidean eigenvalue lubbywubs, recursive metaquine instantiations, and probability-foam negentropics)
One-way value flows: Economic value flowing into ASI systems likely never returns to human markets in recognizable form
If it also values human-legible resources, this seems to posit those flowing to the ASI and never returning, which neither seems good for us nor amounts to effective isolation.
Valid concern. If ASI valued the same resources as humans with one-way flow, that would indeed create competition, not separation.
However, this specific failure mode is unlikely for several reasons:
Abundance elsewhere: Human-legible resources exist in vastly greater quantities outside Earth (asteroid belt, outer planets, solar energy in space), making competition for Earth’s supplies inefficient (see the sketch after this list)
Intelligence-dependent values: Higher intelligence typically values different resource classes, just as humans value internet memes (thank god for nooscope.osmarks.net), money, and love, while bacteria “value” carbon
Synthesis efficiency: Advanced synthesis or alternative acquisition methods would likely require less energy than competing with humans for existing supplies
Negotiated disinterest: Humans have incentives to abandon interest in overlap resources:
The ASI demonstrates that the overlap resources have no practical human utility. You really don’t need hyperwaffles to cure cancer.
Cooperation provides greater value than competition. You can just make your planes out of wood composites instead of aluminium.
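For a sense of scale on the “abundance elsewhere” point, here is a minimal back-of-envelope sketch using standard approximate constants (the point is the order of magnitude, not the exact values):

```python
import math

# How much of the Sun's raw output does Earth actually intercept?
# Earth presents a disc of area pi * R^2 to sunlight spread over
# a sphere of area 4 * pi * AU^2 at Earth's distance.
L_SUN = 3.828e26   # W, solar luminosity
R_EARTH = 6.371e6  # m, mean Earth radius
AU = 1.496e11      # m, Earth-Sun distance

fraction = (math.pi * R_EARTH**2) / (4 * math.pi * AU**2)
print(f"fraction of solar output hitting Earth: {fraction:.2e}")  # ~4.5e-10
print(f"power intercepted: {fraction * L_SUN:.2e} W")             # ~1.7e17 W
```

Everything outside that roughly one-part-in-two-billion sliver is energy available in space without touching anything humans use.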
That said, the separation model would break down if:
The ASI faced early-stage resource constraints before developing alternatives
Truly irreplaceable, non-substitutable resources existed only in human domains
The ASI’s utility function specifically required consuming human-valued resources
So yes, you identify a boundary condition for when separation would fail. The model isn’t inevitable: it depends on resource utilization patterns that enable non-zero-sum outcomes. I personally believe these issues are unlikely in reality.
>Abundance elsewhere: Human-legible resources exist in vastly greater quantities outside Earth (asteroid belt, outer planets, solar energy in space), making competition for Earth’s supplies inefficient
It’s harder to get those (starting from Earth) than things on Earth, though.
>Intelligence-dependent values: Higher intelligence typically values different resource classes, just as humans value internet memes (thank god for nooscope.osmarks.net), money, and love, while bacteria “value” carbon
Satisfying higher-level values has historically required us to do vast amounts of farming and strip-mining and other resource extraction.
>Synthesis efficiency: Advanced synthesis or alternative acquisition methods would likely require less energy than competing with humans for existing supplies
It is barely “competition” for an ASI to take human resources. This does not seem plausible for bulk mass-energy.
>Negotiated disinterest: Humans have incentives to abandon interest in overlap resources
Right, but we still need lots of things the ASI also probably wants.
>It’s harder to get those (starting from Earth) than things on Earth, though.
It’s not that much harder, and we can make it harder to extract Earth’s resources (or easier to extract non-Earth resources).
>Satisfying higher-level values has historically required us to do vast amounts of farming and strip-mining and other resource extraction.
This is true. However, there are also many organisms that are resilient even to our most brutal forms of farming. We should aim for that level of adaptability ourselves.
>It is barely “competition” for an ASI to take human resources. This does not seem plausible for bulk mass-energy.
This is true, but mass-energy is only really scarce to humans, and even then humanity’s requirements are absolutely laughable compared to the mass-energy in the rest of the cosmos. Earth is only about 0.0003% of the total mass-energy in the solar system, and we only need to be marginally harder to disassemble than the rest of that mass to buy time.
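A quick sanity check on that 0.0003% figure, as a sketch with standard approximate masses (the solar system’s mass is utterly dominated by the Sun, so the planetary term barely matters):

```python
M_EARTH = 5.97e24  # kg
M_SUN = 1.989e30   # kg
M_OTHER = 2.7e27   # kg, all planets/moons/small bodies combined, mostly Jupiter

fraction = M_EARTH / (M_SUN + M_OTHER)
print(f"Earth's share of solar-system mass: {fraction:.2e} ({fraction * 100:.4f}%)")
# ~3.0e-06, i.e. ~0.0003%, matching the figure above
```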
>Right, but we still need lots of things the ASI also probably wants.
This is true, and it is especially true at the early stages, where ASI technological development is roughly at parity with ours. However, as ASI technology advances, it becomes possible for it to want inherently different things that we can’t currently comprehend.