G.D. as Capitalist Evolution, and the claim for humanity’s (temporary) upper hand

1. The Spark of G.D.

@David Duvenaud’s recent Guardian op‑ed on gradual disempowerment [1] offers an original piece of foresight: it frames the AI‑alignment problem not as a single catastrophic explosion but as a slow erosion of human leverage. If intelligence is the ability to steer the future, then the central question becomes: who (or what) is doing the steering while the asphalt is still wet?

Duvenaud’s (motion) picture is compelling precisely because it feels so mundane. We will not wake up to a Skynet banner; we will simply notice, one product launch at a time, that fewer meaningful knobs remain within human reach.

2. Capitalism as Cultural Evolution

Many readers immediately point to capitalism itself as the culprit: a cultural heirloom of biological evolution that relentlessly optimizes for profit signals (and plausibly for any proxy variable that can be operationalized). In that reading, AI is merely the newest accelerant poured onto an ancient feedback loop of mutate‑select‑replicate applied to business plans.

> Thesis‑A: If disempowerment is baked into the evolutionary heuristic of capitalism, the destiny of “ordinary humans” is at best to watch the optimization play out, and at worst to be optimized away.

This framing is seductive, but perhaps too anthropomorphic. Evolution has no preferences, only selection pressures; capitalism has no will, only markets and shareholders. We risk smuggling agency into faceless processes whenever we say “capitalism wants X” or “evolution wants Y”.

3. Why Bother Remaining Relevant?

A sharp objection arises: if evolution’s invisible hand “prefers” that highly optimized AIs eclipse human agency, by what right do we insist on staying in charge? If post-ASI human prosperity is still prosperity, even by purely evolutionary metrics such as population size or genetic diversity, why cling to control?

My answer invokes the Lindy Effect: systems that have survived longest have statistically higher odds of surviving longer still. Human civilizational scaffolding (norms, law, coordination technologies) has a track record stretching thousands of years. It may be messy, but it is Lindy‑approved. An abrupt pivot to opaque optimization engines could erase this hard‑won robustness long before we notice the regression. Survival advantage, in short, accrues to whatever preserves proven structures while adding capabilities at the margin.

Lindy is not a moral proof; it is a probabilistic wager. Still, I claim that long survival curves trump short‑term efficiency gains.
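The statistical core of the Lindy wager can be made concrete with a toy simulation (my own sketch, not part of the original essay; the Pareto lifetime distribution and all parameters are assumptions chosen for illustration). For heavy‑tailed lifetimes, the expected *remaining* lifetime of a system grows in proportion to the age it has already reached:

```python
import random

def pareto_sample(alpha=3.0, t_min=1.0):
    """Draw one lifetime from a Pareto(alpha) distribution via inverse CDF."""
    u = random.random()
    return t_min / (1.0 - u) ** (1.0 / alpha)

random.seed(0)
lifetimes = [pareto_sample() for _ in range(200_000)]

def mean_remaining(age):
    """Average remaining lifetime among systems that survived past `age`."""
    survivors = [t - age for t in lifetimes if t > age]
    return sum(survivors) / len(survivors)

for age in (1, 2, 4):
    print(f"survived to age {age} -> mean remaining ~ {mean_remaining(age):.2f}")
# Theory for Pareto(alpha=3): mean remaining lifetime = age / (alpha - 1) = age / 2,
# so doubling the observed age roughly doubles the expected remaining lifetime.
```

The point is not the specific distribution but the qualitative shape: under heavy tails, “has lasted long” is genuine evidence for “will last longer”, which is exactly the property an exponential (memoryless) lifetime lacks.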

<Maybe there will be a better conclusion here.>

----

[1] David Duvenaud, “The Big Idea: Can We Stop AI Making Humans Obsolete?”, The Guardian, 4 May 2025.


Disclaimer: The underlying thinking originated with a human, in non‑English languages; the phrasing is mostly by o3, and was reviewed before being presented here to you.