Requiem for a Transhuman Timeline
The world was fair, the mountains tall,
In Elder Days before the fall
Of mighty kings in Nargothrond
And Gondolin, who now beyond
The Western Seas have passed away:
The world was fair in Durin’s Day.
— J.R.R. Tolkien
I was never meant to work on AI safety. I was never designed to think about superintelligences and try to steer, influence, or change them. I never particularly enjoyed studying the peculiarities of matrix operations, cracking the assumptions of decision theories, or even coding.
I know, of course, that at the very bottom, bits and atoms are all the same — causal laws and information processing.
And yet, part of me, the most romantic and naive part of me, thinks, metaphorically, that we abandoned cells for computers, and this is our punishment.
I was meant, as I saw it, to bring about the glorious transhuman future, in its classical sense. Genetic engineering, neurodevices, DIY biolabs — going hard on biology, going hard on it with extraordinary effort, hubristically, being, you know, awestruck by “endless forms most beautiful” and motivated by the great cosmic destiny of humanity, pushing the proud frontiersman spirit and all that stuff.
I was meant, in other words, to push the singularity of the biotech type. It was more fun, it wasn’t lethal with high probability, and it wasn’t leaving me and other fellow humans aside. On the contrary, we were going to ride that wave and rise with it.
That feeling — that as technology advances, your agency will only be amplified, that the universe with time will pay more and more attention to your metapreferences — is the one I miss the most.
All of that is now like a memory of a distant, careless childhood.
I check old friends on social media. Longevity folks still work on their longevity thing — a relic of a more civilized age, as if our life expectancy wasn’t measured in single-digit years. They serve as yet another contrastive reminder of the sheer scale of the difference between our current state and our dream.
When did everything go wrong, exactly? Was it 2019, when COVID pushed everyone deeper into social media and we gradually transitioned into pre-singularity mode after the attention paper? Of course, it should be something before that, as the law of earlier failure states.
Was it the rise of the internet and social media, which made it far easier and more rewarding to build virtual worlds than to engineer physical ones and which also destroyed human cognitive skills?
Was it 1971, the year when real wages decoupled from productivity and the entire trajectory of broad-based material progress bent downward?
Was it lead poisoning, when an entire generation’s cognitive capacity was quietly degraded by tetraethyllead in gasoline, producing a civilizational wound whose full consequences we don’t even know?
Was it the totalitarian regimes of the twentieth century, whose atrocities taught humanity a visceral lesson: never try to undertake big projects, because ambition on that scale leads to horror?
Or maybe we, apes from the savannas, were simply never meant to colonize superclusters, and the progress we observed was a random short-lived upward fluctuation, a spark of reason rather than a flame?
A decade ago, in my late teenage years, I was giving lectures on neurotech and CRISPR. Little did I know!
A decade ago, I read HPMOR, knew about the rationalists, and tried to optimize my thinking accordingly, but I didn’t particularly care about the grand program of AI alignment.
Artificial superintelligence, for me back then, was not an urgent practical problem that needed to be solved, and even less so one that needed to be solved by me. It was just another beautiful story — a resident of a separate abstract Realm of Cool Transhumanist Things and Concepts, alongside the abolition of aging, neural interfaces, space colonization, geoengineering, and genetic augmentation.
Of course, knowing everything I knew, having taken step one, I could have taken step two as well, but the state of blissful technophilia is a powerful attractor. Purely intellectually, it may not be that hard to transition from classical transhumanism and traditional rationality into the problem of alignment, but it is hard to do it as a human being, when an aura of positivity forms around technology, when the most interesting and successful people hold these views, when you don’t want to look strange in the eyes of people you respect — top scientists, tech entrepreneurs, and even the AI developers themselves. It was not a warm bath but rather a golden pool.
Also, it seems that back then, it felt to me like the question “which transhumanist things should I work on?” could, or should, be resolved aesthetically. And aesthetically, biotech was closer to my heart.
I was discussing Kurzweil’s forecasts. However, it is clear now, although it wasn’t clear back in the day, that my brain wasn’t perceiving it as a really, actually real thing. Now that my brain does, I totally see the difference.
Of course, even ten years ago it was already too late. Even then, I wasn’t living in the transhuman timeline, but I thought I was, and although this belief was much more a fact about my youthful naivety than about the surrounding reality, the feeling was pleasant.
The first trivial lesson I drew from this: you can be more right than 99.9% of people and still be fatally wrong.
At twenty, I had read Bostrom and Vinge. I was giving lectures about the singularity, and I had enough intellect and nonconformism not to bend under social pressure and to honestly talk about the importance of this topic and the fact that it could all become reality soon. But, great cosmos, I did not understand what I was talking about! I was a child, really. I was almost entirely missing a number of critical points — partly from an insufficiently serious approach to analysis, partly from ignorance, and partly because certain things were simply impossible to grasp at the level of normal human intelligence. And so, for all my openness to the ideas of radical technological progress, a full-blown singularity with superintelligence still seemed somewhat in the realm of science fiction. Apparently, for every transhumanist there is a rate of change which is too much.
However, there were two even more significant lessons.
The first one is about how the history of technology works.
Planes are not modified birds, just as cars are not improved horses. It was silly to expect the opposite with intelligence. And yet, there was hope, and the hope was not totally meaningless. It was conjectured that intelligence would be something much more complex to design from scratch than physical labor devices, and thus we would need to rely on what was already created by evolution, working on top of it. This doesn’t sound insane even now. It’s just that reality had the right to choose differently, and did so.
And the second lesson is about how real defeats work.
Dinosaurs lost to other animals, not to, say, bacteria. Apes lost to other primates, not to reptiles or birds. Native Americans lost to other humans, not to local predators. European empires lost to other European empires, not to the peoples they colonized. And transhumanists lost to other progressivists — that is, to AI accelerationists — not to traditionalists or conservatives.
All the complaints about conservatives who fear GMOs and cyber-modifications never made sense from the very beginning. From the very beginning, they were never capable of stopping anything. The most dangerous enemies are found among the most powerful agents, not the most ideologically distant ones. Each successive battle is fought among the previous round’s winners, and it never replays the prior distribution of sides.
In retrospect, this seems obvious, but how non-obvious it was just five years ago! Well, at least for me.
The evening blooms with spring scents — this always makes me feel younger. Yet another reason to recall 2015.
I look at the stars.
We were meant to colonize them. The ghosts of our immeasurable possible great-grandchildren look from there at me. They are still possible, and yet they look not with hope or approval, but with fear and contempt.
Even now, it is possible, or rather it is not prohibited by the laws of physics, that we turn back toward the future. We could repurpose talent, compute, and funding to solve biology, and there would be hope, and pride of the human spirit, and the future would feel real once more.
I want to go home.
I have said for some time that the problem is much deeper. The human race in general was never on board with transhumanism. The idea of radical life extension has been around for millennia, it has been scientifically plausible for decades if not centuries, but it has always been a marginal concern. There was never a society which organized to make the cure of ageing a major priority.
There has been an incremental improvement over time, both in medical capability (thanks to the progress of conventional medicine) and in openness to life extension (partly thanks to science fiction, perhaps). But it’s almost as if humanity backed its way into this improved situation, under the pressure of immediate concerns (e.g. specific illnesses, individual grief), without ever having consciously adopted a futurist vision like those you describe. At the level of individual psychology, and even more at the level of mass psychology, most people are completely resigned to living out the historically normal human life cycle.
Nonetheless, we actually have a form of transhumanism in power now, but it’s this AI-centric version, half of whose protagonists are in denial about what they are creating. Many of the others think they can skip biology entirely and go straight to mind uploading or the creation of benevolent AI, or even believe they are in a simulation. This points to a divide within transhumanism itself (and adjacent movements). But socially and politically, I think denial of the full implications of AI is the main enabling factor. There is no politician who runs for office on the platform of creating non-biological superhuman intelligence. It’s only the tech CEOs who talk directly about anything like that.
I have been contemplating a post about different forms of transhumanism which would go into more detail about all this.
From a very broad perspective, not even focused on Earth, but just on the possible destinies of intelligent life in the cosmos once technology comes into play… It would not be surprising to know that in the encounter with technology, intelligent species often blow up their world or inadvertently replace themselves with a successor species, and only sometimes manage to preserve their own existence and imperatives. It’s just that we also get to live through one instance of such an encounter in person.
Many premodern societies actually spent a lot of time and effort on pursuing anti-aging technology. Perhaps not to the extent of organizing their whole society around it but their efforts were not trivial in scale. In medieval and early modern Europe, it was a primary goal of alchemy alongside transmuting base metals into gold. For Christians, the Bible already hinted at the possibility of radical life extension (biblical figures such as Methuselah were said to have lived for hundreds of years) and prominent intellectuals like Roger Bacon believed that human lifespans had been artificially shortened. Searching for a means of reversing this “corruption” to extend human lifespans was a mainstream, even cliche intellectual pursuit for centuries. It only became fringe with the rise of modernity.
I would describe what we have already done as radical life extension. Perhaps we have a difference in definition. From this link:
The most convincing model of why we didn’t orient ourselves around something like a cult of increasing life expectancy is that we went down the path of least resistance of technological progress and economic growth.
I claim this was never a realistic goal. The set of cultures which have the cultural norms/tools to create technology and large economic growth (which is required for transhumanism) AND which prioritize transhumanism above everything else is not very large.
In some sense, you can see the afterlife promise of many religions as a form of transhumanism, and billions of people are on board with that. Yet basically, none of these religions have contributed to actually achieving something like transhumanism.
One of my posts before this one was exactly on that: https://www.lesswrong.com/posts/RrL7xqdPycGNHQkXR/the-lethal-reality-hypothesis
Also, it does look increasingly likely to me that we can go directly to mind uploading, but that is still probably not the main bet I would make if the AGI race got cancelled.
I noticed this part:
And my first thought was: Hasn’t this been obvious since ~2022 and isn’t “d/acc” the obvious thing to work on, given the moral nihilism and not-practically-stoppable risk-externalizing cowboy bullshit happening among the “e/acc” types?
This is why I’m focused on Satisficing. This is why I’m focused on Global Governance. This is why I’m focused on building up local healthy practical affinity groups. Without something like Kant, an officer in a survival team will not discharge team duties very well. Hence beating the drum of being dutifully decent to the strongest and fastest growing possible teammates around.
Good survival teams MIGHT survive. We probably won’t. But there’s not many other options than to find the people who want to go as fast as fucking possible towards “come with me if you want to live” projects.
Elon Musk invested in Tesla not because it was an obviously good idea at the time, but simply because any timeline where an electric car company didn’t spring into existence was going to collapse into ruinous Global Warming. IF in the Global Warming timeline we die… THEN just act like the Global Warming timeline will somehow be avoided, and then position yourself to be happy in that future. (The other futures are doomed anyway, so don’t bother optimizing for them. Your energies will be wasted no matter what you do, if nanites kill you 18 months from now, so act as if nanites will definitely not kill you in the next 18 months.)
“D/acc” is playing for something that can absorb people’s actual energies, in the event that nothing else (that they can’t have done something about anyway) kills them even faster.
To be clear, I agree with you, but I suspect that to a certain kind of mind pursuing d/acc and satisficing and governance puts you in the realm of the luddites and the social conservatives. “There is no good or evil, there is only power/optimisation and those too weak to take it.”
I happen to believe there is another way, but Moloch provides for his own.
Lol! I don’t care what “certain kinds of minds” think of “who I am socially put with” if being put that way by those minds doesn’t conduce to better chances of SURVIVING a plausibly imminent global chaos and gigadeath and GETTING to a Win Condition somehow.
If Luddism is correct, I want to believe in Luddism.
If Luddism is not correct, then I don’t want to believe in Luddism.
(I’m not currently doing a lot of Luddism personally? My vibe lately is roughly heading for Agentic Coding and 3D printing and bottom-up affinity groups using BFT coordination protocols to flock efficiently. More “solar punk” than “Luddism”? But I’d be happy to switch if there are actually good reasons for that!)
Say more about your better way! ❤
Is that so bad? The rational use of irrational symbols has proven highly effective in the past. Whatever it takes to survive is worth considering.
I went into medicine because medicine is applied transhumanism.
Most of my colleagues would object strenuously to this characterization, and I think they are wrong. They spend their days fighting disease, clawing back life-years from the void, chemically and surgically overriding the factory defaults of the human body—and yet the word “transhumanism” would make many of them recoil as though I had said something vaguely embarrassing at a dinner party. There is a certain type of person who will happily do the thing while refusing, on aesthetic grounds, to endorse the philosophy behind the thing. I do not begrudge them this, and they outnumber me.
(Not everyone is a high decoupler, or an adherent of radical internal consistency and coherence.)
I was always one of the people who endorsed the philosophy behind the thing. I grew up dreaming about genetic engineering, cybernetic augmentation, and the eventual abolition of aging. I had the full card-carrying package. The only reason I was not particularly anxious about AI timelines is that I did not have AI timelines—or rather, my implicit AI timelines were “late 21st century, someone else’s problem,” which is not really a timeline so much as a polite way of not thinking about it. I did not feel an urgent need to think about it, even though I started well before it was hip.
Then, somewhat ahead of schedule, the timeline arrived. I think 2022 was when I went from concern to “oh fuck, we’re really trying to make AGI, aren’t we?”
Here is where I am supposed to be distressed. And I am, a little, but perhaps not about the things you would expect. The dreams of genetic engineering and cybernetic augmentation were always instrumental—means toward the end of not dying, of having more cognitive capacity than a three-pound organ optimized for Pleistocene conditions can provide, of becoming, in some meaningful sense, more. If AGI and then ASI arrive and are aligned (I am aware this is a large “if,” possibly the largest “if” in the history of human sentences), they get me to the same destination considerably faster than the biological route would have. I find I am not especially mourning the scenic path.
I am not wedded to the idea of being human, or becoming just a little bit better than human, or just a lot better than human. I want to become the kind of entity that takes up most of a Matrioshka Brain. You can’t make one out of meat.
So I am willing to shed the flesh as soon as shedding it becomes feasible, which puts me in a minority even among people who would self-identify as transhumanists. Most people, it turns out, want to be enhanced humans rather than post-humans. They want to keep the architecture and upgrade the components. I understand the appeal. I just do not share it strongly enough to treat it as a constraint. The part of me I care most about preserving and enhancing is computational; it does not care about biology as a privileged substrate.
What does cause me distress is the perceived risk of our current path killing me, and maybe everyone else. If you want a p(doom), it hovers around 20% these days, down from a peak of 30%. Not great, not terrible.
We could all die. Failure and death is always an option. I think about it with the particular emotional register of someone who has accepted a thing without having made peace with it. You can accept the actuarial tables without being happy about them.
I can’t do much about it, but I refuse to learn more helplessness than is strictly necessary. The thing about feeling like an actor in a history you cannot change: it does not actually follow that you should stop acting. Nothing I say or do will determine the outcome of the next decade in any individually legible way. This is also true of voting, and of keeping in shape, and of most of the things humans do that we nonetheless consider worthwhile. Super-rationality can be distinct from individual rationality. I will try anyway.
We might have become immortal and made Dyson Swarms anyway, with only minimally augmented human brains at our disposal. We are a capable species. It might have taken much longer. Oh well, as long as AGI and ASI are aligned, I’m happy. I just note that is a very big “if”.
I don’t think that in an alternative timeline it was too realistic for us to “rise with the wave” of bio transhumanism. Even without the acceleration from powerful AI, germline engineering would have been the most likely path to super/transhumans—leaving us behind just like AI. Still a better timeline, I think.
On the one hand, the current acceleration of biology via AI does point me toward the conclusion that biology is way harder to crack than I initially expected. On the other hand, there are still many things to try, both in terms of science and institutions, which haven’t been tried. So I wouldn’t be surprised if things like curing aging and enhancing adult human intelligence were achievable by more competent 21st-century civilizations in a series of generational moonshots.
I’m still working on the biological path! (Making eggs from stem cells, embryo selection, etc.) I wish more talented people went this route instead of working on AI.
I’m pretty sure that’s what we did do? We just chose to copy at the abstraction level of neural nets and predictive processing instead of at the physical substrate level of neurons and proteins. We definitely didn’t take the path of actually understanding what we were doing in all its complexity.
Otherwise, excellent post.
Thanks!
I mean, one could definitely imagine/develop AIs which are further from human neurobiology, but current AIs are quite far away from it, in my view.
I think this is a very important point. The default expectation was (1) “intelligence is too complex to design from scratch, so we will likely need to rely on something” and (2) “so we will rely on human neurobiology”. While (1) has proven to be true, (2) not so much, and (2) was not the only possible conclusion from (1).
That’s pretty much my position, yes, with the small caveat that not relying on neurobiology did not mean excluding our neuroscience-inspired abstract models of cognition.
when i was a little girl[1] and learned about crispr in biology class it immediately became my dream to get pregnant and crispr my own fetus, my body my choice they cant stop me. i was weirdly serious about this until i hit college and realized doing any bio related research would involve undue amounts of pipetting (which i hate) and not enough math (which i love).
you’re right—AI has overtaken all of the discussion and we have forgotten about how cool genetic modification can be. i am sure biologists are still plugging away at interesting genetic modification in the background. we should be putting more funding and research towards topics like this currently outside of the research vogue. i so totally agree. more transhumanism. on this note, i argue that we are already cyborgs[2] and have already succeeded in transing our humanity[3]
but to me, transhumanism would need some sort of overarching goal. like i support doing transhumanism for sillies, but i’m just not a silly guy personally. why are we transing our humanity like that? whats the end goal here? i am so serious. i have seen both life and beauty posited as goals, neither of which compel me much. transhumanism for space colonization i like though. that one compels me.
i truly do not think humanity will make it in space. or at least, its not the most feasible. i feel it is a waste of resources and humans would have to evolve so much that they are essentially no longer humans to exist in space. BUT i think we should build robots that can colonize space. in many ways we already have.
i always get a little confused on the end goals of human space exploration. here are some i see:
because it is a difficult thing to accomplish, and to quote jfk, “we will go to the moon not because it is easy but because it is hard” and hard things are fun.
resource extraction (this, i feel, makes the most sense, as i am toxically economically pessimistic.) robots are the answer for this i feel.
escaping earth because it is burning up. i feel surviving on a polluted, climate changed earth will always be easier than surviving in space. supervolcano, etc, i feel would be easier than space. (i can elaborate upon request.)
like uh legacy. some sort of “we were here” and continuing the legacy of the human race type thing. i feel robots are the answer to this one as well.
maybe im the weird one but i dont feel the human body/form is that sacred. i think we should be focusing on robots that can do the things that we cant. i know i am a freak for saying this but i view robot successors to the human race as an almost parent-child relationship. as oingo boingo once said, “no one lives forever!”[4] and thats why you leave someone behind to continue your legacy. this is normal and i am quite comfortable with this.
anyhoo. most living organisms are suited to earth-like conditions. i feel replicating earth-like conditions in space consistently is much harder than simply designing robots built to survive in space. or even designing organisms, that could be cool, i am just more into robots because its more related to my area of research. also, i feel designing umwelts for space capable of data collection, data synthesis, and construction methods would be easier using robot type tools and not designing a high level organism from scratch. for numerous reasons.
nice essay tho ur prose is very emotional in a moving way 👍 u shld write poetry king
age 14, which, actually not too long ago!
the phone is an extension of the self, taking my computer away from me is like cutting off part of my brain, technology is so involved in our lives that we become disabled without it; conversely through phones and such our capabilities are enhanced. bionic humans are you and me. this delves into philosophy of the self and how we define the self which is its own topic. unrelated. i shant delve into that one.
what defines a human anyway?
i read like 2⁄3 of HPMOR. the section where dumbledore was like “harry everything dies u must make peace with it” and harry was like “no i think we should live forever because: life is good. you’re crazy for not thinking life is good.” and then the narrative structure of the passage was like “BOOM! DUMBLEDORE JUST GOT DESTROYED!” was very funny in how i felt it was poorly argued. also riddled with fallacies. maybe it was better argued at a later point, but at the time of reading it i was obsessed with how the author was just claiming life is good therefore we should live forever. not to reference clavicular[5] but its like saying “beauty is good” and just running with that. like u havent convinced me that life or beauty is good at all but i respect the confidence. lol. my take is that reproduction is part of life and instead of living forever we should be focused on effectively reproducing. (whatever that may entail.)
also not to wallacianly footnote my footnote, transhumanism is SO in full swing see: clavicular see: people who research robotic limbs. i just think the phone as a cyborgian component of the human experience is more interesting as it is more commonplace.
i apologize for mentioning clavicular but in my defense eliezer yudkowsky and him are the same to me. i enjoy them both. i want them to do whatever they think is right. eliezer yudkowsky twitch stream when, clavicular harry potter fanfiction when?
And I feel admiration for people relentlessly looking for an alien utility maximizer in a thing that is bound to approximate our messy decision theories.
Tangentially, this song is done very well in the Return to Moria game; I recommend looking up the OST on your music provider of choice.
I like the Clamavi De Profundis version.