G.D. as Capitalist Evolution, and the case for humanity’s (temporary) upper hand
1. The Spark of G.D.
@David Duvenaud’s recent Guardian op‑ed on gradual disempowerment [1] offers an original piece of foresight: it frames the AI‑alignment problem not as a single catastrophic explosion but as a slow erosion of human leverage. If intelligence is the ability to steer the future, then the central question is: who (or what) is doing the steering while the asphalt is still wet?
Duvenaud’s (motion) picture is compelling precisely because it feels so mundane. We will not wake up to a Skynet banner; we will simply notice, one product launch at a time, that fewer meaningful knobs are within human reach.
2. Capitalism as Cultural Evolution
Many readers immediately point to capitalism itself as the culprit—a cultural heirloom of biological evolution that relentlessly optimizes for profit signals (and plausibly for any proxy variable that can be operationalized). In that reading, AI is merely the newest accelerant poured onto an ancient feedback loop of mutate‑select‑replicate applied to business plans.
> Thesis‑A: If disempowerment is baked into the evolutionary heuristic of capitalism, the destiny of “ordinary humans” is at best to watch the optimization play out, and at worst to be optimized away.
This framing is seductive, but perhaps too anthropomorphic. Evolution has no preferences, only selection pressures; capitalism has no will, only markets and shareholders. We risk smuggling agency into faceless processes whenever we say “capitalism wants X” or “evolution wants Y”.
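To make the mutate‑select‑replicate loop above concrete, here is a toy sketch in Python. It is entirely my own illustration, not anything from Duvenaud’s op‑ed: every trait, weight, and number is an assumption. Selection only ever sees a measurable proxy (“profit”), while the quantity we actually care about (“human leverage”) is never measured and simply drifts wherever the proxy pushes it:

```python
import random

random.seed(0)

# Toy mutate-select-replicate loop over "business plans".
# Selection only ever sees the proxy (profit); the unmeasured
# target (human leverage) is along for the ride.
# All traits, weights, and numbers are illustrative assumptions.

def profit(plan):
    # Measurable proxy: rewards automation more strongly than quality.
    return plan["automation"] + 0.5 * plan["quality"]

def human_leverage(plan):
    # Unmeasured target: erodes as automation outpaces quality.
    return plan["quality"] - plan["automation"]

def mutate(plan):
    # Replicate with small random variation in each trait.
    return {trait: value + random.gauss(0, 0.1) for trait, value in plan.items()}

# Start with a population of identical, neutral plans.
population = [{"automation": 0.0, "quality": 0.0} for _ in range(50)]

for _ in range(200):
    # Select: keep the half of the population with the highest proxy score.
    population.sort(key=profit, reverse=True)
    survivors = population[: len(population) // 2]
    # Replicate: each survivor spawns one mutated copy.
    population = survivors + [mutate(plan) for plan in survivors]

best = max(population, key=profit)
print(f"proxy (profit):          {profit(best):7.2f}")
print(f"target (human leverage): {human_leverage(best):7.2f}")
```

Nothing in this loop “wants” anything; leverage falls simply because the proxy rewards automation more than quality. That is the sense in which disempowerment can be mundane rather than adversarial.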
3. Why Bother Remaining Relevant?
A sharp objection arises: If evolution’s invisible hand “prefers” that highly optimized AIs eclipse human agency, by what title/right do we insist on staying in charge? If human prosperity post-ASI is still prosperity, even by purely evolutionary metrics such as population size or genetic diversity, why cling to control?
My answer invokes the Lindy Effect: systems that survive longest have statistically higher odds of surviving longer still. Human civilizational scaffolding—norms, law, coordination technologies—has a track record stretching thousands of years. It may be messy, but it is Lindy‑approved. An abrupt pivot to opaque optimization engines could erase this hard‑won robustness long before we notice the regress. Survival advantage, in short, accrues to whatever preserves proven structures while adding capabilities at the margin.
Lindy is not a moral proof; it is a probabilistic wager. Still, I claim that long survival curves trump short‑term efficiency gains.
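To make the wager slightly more precise, here is a minimal sketch under one standard assumption: the lifetime $T$ of a structure follows a Pareto (power‑law) distribution with minimum age $t_0$ and shape parameter $\alpha > 1$ (both symbols are illustrative assumptions of mine, not anything in Duvenaud’s piece). In that setting the Lindy Effect holds exactly:

$$P(T > t) = \left(\frac{t_0}{t}\right)^{\alpha} \quad \text{for } t \ge t_0, \qquad \mathbb{E}\left[\,T - t \mid T > t\,\right] = \frac{t}{\alpha - 1}.$$

The expected remaining lifetime grows linearly with the age already observed: thousands of years of civilizational scaffolding are statistical evidence for further thousands, not a moral entitlement to them.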
<Maybe there will be a better conclusion here.>
----
[1] David Duvenaud, “The Big Idea: Can We Stop AI Making Humans Obsolete?”, The Guardian, 4 May 2025.
Disclaimer: The thinking originated from a human in less widely spoken languages; the phrasing is mostly by o3, reviewed before presenting it here to you.
Thanks for the summary. I really like your phrasing “We will not wake up to a Skynet banner; we will simply notice, one product launch at a time, that fewer meaningful knobs are within human reach.”
But as for “by what title/right do we insist on staying in charge?” I find it odd to act as if there is some external moral frame that we need to satisfy to maintain power. By what right does a bear catch a fish? Or a mother feed her child? I hope that a moral frame comprehensive enough to include humans is sufficiently compelling to future AIs to make them treat us well, but I don’t think that that happens by default.
I think we should frame the problem as “how do we make sure we control the moral framework of future powerful beings”, not as “how do we justify our existence to whatever takes over”. I think it’s entirely possible for us to end up building something that takes over that doesn’t care about our interests, and I simply care about (my) human interests, full stop, with no larger justification.
I might have an expansive view of my interests that includes all sorts of charity to other beings in a way that is easy for other beings to get on board with. But there are just so, so many possible beings that could exist that won’t care about my interests or moral code. Many already exist with us on this planet, such as wild animals and totalitarian governments. So my plea is: don’t think you can argue your way into being treated well! Instead, make sure that any being or institution you create has a permanent interest in treating you well.
A counterpoint may be that human civilization is just the observable of underlying optimization processes (individual and group selection on several levels). Obviously this observable has shown variance whenever a new advantage got widely adopted and civilization morphed into the memeplex of whoever had the advantage. So with AI we shouldn’t expect this process to suddenly stop. However, the entities with the next advantage may well be agentic AIs.
I’m excited to have found your comment, osten; it reads as a pretty insightful view to me.
Let me restate what I understood your light (and welcome) critique to be: I presented “human civilization” as an actor that has endured for a very long time, which heuristically suggests high resilience and robustness, and which therefore deserves respect and continued control. You are saying it has not endured as a single structure to which Lindy can be applied, since it has changed significantly and many times; perhaps we should instead split it into “feudal civilization”, “democratic civilization”, and so on.
The other interpretation I see is that, yes, it is one structure, and ASI will keep that structure but come to lead within it. I enjoy that argument, but it would not fully work unless AIs gain the legal status of a person; it would partially work whenever an AI can gather human proxies.