Turning 20 in the probable pre-apocalypse
Master version of this on https://parvmahajan.com/2025/12/21/turning-20.html
I turn 20 in January, and the world looks very strange. Probably, things will change very quickly. Maybe, one of those things is whether or not we’re still here.
This moment seems very fragile, and perhaps more than most moments, will never happen again. I want to capture a little bit of what it feels like to be alive right now.
I.
Everywhere around me there is this incredible sense of freefall and of grasping. I realize with excitement and horror that over a semester Claude went from not understanding my homework to easily solving it, and I recognize this is the most normal things will ever be. Suddenly, the ceiling for what is possible seems so high—my classmates join startups, accelerate their degrees; I find myself building bespoke bioinformatics tools in minutes, running month-long projects in days. I write dozens of emails and thousands of lines of code a week, and for the first time I no longer feel limited by my ability but by my willpower. I spread the gospel to my friends—“there has never been a better time to have a problem”—even as I recognize the ones they seek to solve will soon be obsolete.
Because as the ceiling rises so does the floor, just much, much faster. I look at the time horizon chart with this now-familiar feeling of hype-dread. “Wow, 4 hours!” “Oh no, 4 hours.” I cannot emotionally price in the exponential yet, nor do I try very hard to. Around me I see echoes of this sentiment; the row ahead of me ignores the professor to cold-message hiring managers on LinkedIn, hoping to escape “the permanent underclass.” The girl behind me whispers about Codex to her friend. Every one of my actions is dominated by the opportunity cost and the counterfactual; every one of my plans dominated by its too-long timeline. Everything feels both hopeless—my impact on risk almost certainly will round down to zero—and extremely urgent—if I don’t try now, then I won’t have a chance to.
I read voraciously. Blogposts about control, papers about interpretability, articles on foreign relations and math and philosophy—anything that might help me know and change the future. I learn unteachable methods to stay sane. I even read some fiction, remembering how Toni Morrison got me through my college apps. I become adept at synthesis and critique, and find myself on the frontier in just a couple hundred thousand words.
I give a talk to some freshmen, showing the graphs, asking them to extrapolate. There’s a stunned silence when I pause for questions. I’m nervous I scared them without offering many good solutions. I’m also nervous there aren’t good solutions left.
I stop going to lecture; I can no longer justify the time, and no one notices in a 300-person class anyway. I spend most of my time in the research building instead.
II.
A journalist asked me this year why I do what I do if I see unemployment on the horizon. I answered something about how it would be a shame to waste the opportunity on anything less important. Maybe I should have said that extraordinary times call for extraordinary effort.
If there are a few years left, I want to spend them fully, and this is what carries me through most days. I spend hours with my friends, I treat myself often, I work until I can’t string together a sentence. I try to bring others joy, I try to bring myself joy. I feel incredibly lonely still, and the days are often filled with wasted time and self-destructive rotting. I forgive myself, because there is no time to do otherwise.
There were many months where I would look at a leaf, or a building, or a light, and cry because I did not want the world with these things to end, and it seems like it may end. I don’t cry as much anymore, although I do still mourn. I catch myself wondering if my parents will retire before they are forced to, and if my youngest cousin will get to graduate high school. I hold hugs tighter than I used to; people ask me how I’m holding up, and also say I look much happier now than I have in months. I don’t understand what those mean together, but hope it’s okay.
Most of me feels very lucky to be alive right now, in this maybe-most-impactful-time. The leaf, the building, the light are still here. A smaller part of me wishes I lived in a time with latitude to meaningfully predict my 30s, or at least whether I would have a 30s. But it would be such a shame to waste this opportunity on anything less important.
Thank you for writing this, I find it very relatable. I’d heart react the post if that feature existed, so I’ll heart react my comment instead.
Thank you, I’m glad(?) it resonated. I liked “Mourning a life without AI” a lot and reading that encouraged me to publish this.
I found both of these posts helpful for me, despite being ~10 years older than you guys. Reading how people are engaging with the situation emotionally somehow supports my own emotional engagement with what’s going on.
Feels the same, bro. I’m 18 and still trapped in unhelpful university classes while my peers are not quite aware of the upcoming changes. I made a PPT and selected songs, and on the 21st I held a secular solstice for three people, including me. Maybe the first solstice in China. I tried to donate to EA courses but failed because I have no Visa card. When I saw this, it struck me that I am not alone. There are young people similar to me, who did better than me, and who inspired me. It struck me that I can continue to do meaningful things instead of being trapped in the routine as a new-era-new-youth. That I can always try again. I wish you a happy Solstice, Christmas, Holidays, etc.
Yeah, I see the kitsch here, but also I really mean it. Thank you Parv.
Curated.
I’m 35, born in 1990. The world felt pretty sensible for most of my lifetime, definitely up through 2010, maybe even 2015. Broken, sure, but there was a normality to it. The present era is disorienting. I’ve kind of imagined that the disorientation is because I’m anchored on how life was for me growing up (that was normal for me), and that someone growing up in this tumultuous era[1] would find it normal for them and not so unsettling.
This post makes me think otherwise. What I’m reading is that perhaps the disorientation comes from the pace of change and the resultant uncertainty, and that having only known uncertainty and rapid change doesn’t make it easier to maintain footing. If anything (and this is a separate thought I’ve had, not connected before), I have felt glad I’m not 20 (perhaps wrongly[2], but still). I had a chance to find some footing and stability in life before things got so mad.
From the reactions (and karma) here, it sounds like this resonates; across ages, a lot of us are feeling something. How to live in times such as these feels important (see my attempt in A Slow Guide to Confronting Doom), and though this post is not an answer, I like seeing the challenge raised again, especially so evocatively and concretely.
Primarily I’m thinking about my child, born into the eye of the storm, but I would expect this to apply to someone coming of age around now.
Age takes its toll on the mind and body (and spirit), but wisdom might be more valuable than vigor at present.
Also mid-30s: interesting you say that; I had the paradoxical reaction of both partly agreeing and also thinking “yeah, that was what it felt like to be twenty”.
I feel less of that general feeling of big picture uncertainty and being a small thing in a big complicated world today, even though I think objectively the world is much more uncertain now (not just due to AI but general politics and economics).
I can feel myself wanting to dismiss it on that basis, but that’s obviously not rational either, since someone must be right about it.
The rate of change has gone through the roof. It took more than 10 years from the advent of the internet to today’s ubiquitous and thriving digital economy. Yet in less than 4 years, AI has changed the paradigm of how we use our devices: from being a fringe tech used for niche scenarios, it has become one of the primary modes of digital interaction for many.
With evolving capabilities, a future where natural language becomes the primary interface to the digital realm is very much a possibility. It’s surreal to be living in the “sci-fi” age.
The concern we should have is around the ethics of the people sponsoring this tremendous technology. In the right hands, it frees men from labour. In the wrong hands, it frees its owners from reliance on “workers,” letting them build out a world where the majority of people don’t have a place to exist.
I’m 15 (only a high school freshman)
Reading this thread is bizarre, honestly, because everyone here is in their 20s and 30s, talking as if those are the uncertain ages.
I’m not even sure I’ll have a normal high school experience, let alone college.
ChatGPT was released in my first year of junior high, and it’s more likely than not that something akin to an ASI exists before I graduate. That’s absurd to wrap my head around.
The isolation and the slope are real. A year ago 4o couldn’t solve basic algebra when I fed it my problems (lol), and now here it is solving (?) PhD math (this may be exaggerated; still, the improvement is there).
And for some reason nobody seems to wrap their heads around what’s going on. It’s insane.
The strangest part is not knowing what I want and what’ll happen (I suppose that’s the whole point of the Singularity). Do I prep for exams that might be irrelevant in 5 years? Go to uni? It feels like planning for a world that may not exist by the time I graduate.
(also, Opus 4.5 w/ Claude Code is Agent-1, at least something close to it)
In my 40s, and remembering working on Singularity activism in my 20s… I have a lot of this feeling, but it is mixed with a profound sense of “social shear” that is somewhat disorienting.
There are people I care about who can barely use computers. I have family that think the Singularity is far away because they argued with me about how close the Singularity was 10 years ago, didn’t update from the conversation, and haven’t updated since then on their own because of cached thoughts… or something?
I appreciate the way you managed to hit the evidence via allusion to technologies and capacities and dates, and I also appreciate the way the writing stays emotionally evocative.
I read this aloud to someone who was quiet for a while afterwards, and also forwarded the link to someone smart that I care about.
Thanks! I’m surprised it was emotionally impactful, but can definitely see it being relatable. I’ve found a lot of (especially early-career) AIS folks dealing with this “my friends and family don’t internalize this,” but I think this will change once job losses start hitting (thus the “permanent underclass” discourse).
Thinking about it more, a lot of people from aughties-era activism burned out on it. I have mostly NOT burned out on Singularitarianism because I’ve always been consciously half-assing it.
I see this as essentially a human governance problem, and working on it is clearly a public good, and something saints would fix, if they exist. If I had my druthers, MIRI would be a fallback world government at this point, and so full of legitimacy that the people who rely on that fallback would be sad if MIRI hadn’t started acquiring at least some sovereign territory (the way the Vatican is technically a country) and acting more like a real government, probably with their own currency, and a census, and low level AI functioning as bureaucrats, and having a seat on the high council, and so on.
We had roughly two decades to make that happen, but in the absence of a clear call to actually effective action, my attitude has been that the right move is to just vibe along, and help when it’s cheap and fun to do so, and shirk duties when it isn’t, with praise and mostly honest feedback for those who are seriously putting their back into tasks that they think might really help. I think this is why I didn’t burn out? Maybe?
Something I notice you’re NOT talking about in the essay is the chance of burnout before any big obvious Pivotal Acts occur. Do you think you can maintain your current spiritual pace until this pace becomes more obviously pointless?
I don’t know enough about 00s activism to comment on it confidently, but I would be highly confused if MIRI started a govt/bought sovereign land because it doesn’t seem to align with counterfactually reducing AI takeover risk, and probably fails in the takeover scenarios they’re concerned about anyway. I also get the impression MIRI/OP made somewhat reasonable decisions in the face of high uncertainty, but feel much less confident about that.
That being said, I’m lucky to have an extremely high bar for burnout and a high capacity for many projects at once. I’ve of course made plans of what to loudly give up on in case of burnout, but don’t expect those to be used in the near future. Like I gestured at in the post, I think today’s tools are quite good at multiplying effective output in a way that’s very fun and burnout-reducing!
The thing to remember is that Eliezer, in 2006, was still a genius, but he was full of way way way more chutzpah and clarity and self-confidence… he was closer to a normie, and better able to connect with them verbally and emotionally in a medium other than fiction.
His original plan was just to straight out construct “seed AI” (which nowadays people call “AGI”) and have it recursively bootstrap to a Singleton in control of the light cone (which would count as a Pivotal Act and an ASI in modern jargon?) without worrying whether or not the entity itself had self awareness or moral patiency, and without bothering to secure the consent of the governed from the humans who had no direct input or particular warning or consultation in advance. He didn’t make any mouth sounds about those things (digital patients or democracy) back then.
I was basically in favor of this, but with reservations. It would have been the end of involuntary death and involuntary taxes, I’m pretty sure? Yay for that! I think Eliezer_2006’s plan could have been meliorated in some places and improved in others, but I think it was essentially solid. Whoever moves first probably wins, and he saw that directly, and said it was true up front for quite a while.
Then later though… after “The Singularity Institute FOR Artificial Intelligence” (the old name of MIRI) sold its name to Google in ~2012 and started hiring mathematicians (and Eliezer started saying “the most important thing about keeping a secret is keeping secret that a secret is being kept”) I kinda assumed they were actually gonna just eventually DO IT, after building it “in secret”.
It didn’t look like it from the outside. It looked from the outside that they were doing a bunch of half-masturbatory math that might hypothetically win them some human status games and be semi-safely publishable… but… you know… that was PLAUSIBLY a FRONT for what they were REALLY doing, right?
Taking them at face value though, I declared myself a “post-rationalist who is STILL a singularitarian”, told people that SIAI had sold their Mandate Of Heaven to Google, and got a job in ML at Google, and told anyone who would listen that LW should start holding elections for the community’s leaders, instead of trusting in non-profit governance systems.
I was hoping I would get to renounce my error after MIRI conquered Earth and imposed Consent-based Optimality on it, according to CEV (or whatever).
Clearly that didn’t happen.
For myself, it took me like 3 months inside Google to be sure that almost literally no one in that place was like “secretly much smarter than they appear” and “secretly working on the Singularity”. It was just “Oligarchy, but faster, and winning more often”. Le sigh.
I kept asking people about the Singularity and they would say “what’s that?” The handful of engineers I found in there were working on the Singularity despite their managers’ preferences, rather than because of them (like as secret 20% projects, back when “20% projects” were famously something Google let every engineer work on if they wanted).
Geoff Hinton wasn’t on the ball in 2014. Kurzweil was talking his talk but not walking the walk. When Schmidhuber visited he was his usual sane and arrogant self, but people laughed about it rather than taking his literal words about the literal future and past literally seriously. I helped organize tech talks for a bit, but no needles were moved that I could tell.
I feel like maybe Sergey is FINALLY having his head put into the real game by Gemini by hand? In order for that to have happened he had to have been open to it. Larry was the guy who really was into Transformative AGI back in 2015, if anyone, but Larry was, from what I can tell, surrounded by scheming managers telling him lies, and then he got sucked into Google Fiber, and then his soul was killed by having to unwind Google Fiber (with tragic layoffs and stuff) when it failed. And then Trump’s election in 2016 put the nail in the coffin of his hopes for the future I think?
Look at this picture:
No, really look at this:
There were futures that might have been, that we, in this timeline, can no longer access, and Larry understood this fact too:
What worlds we have already lost. Such worlds.
But like… there are VERY deep questions, when it comes to the souls of people running the planet, as to what they will REALLY choose when they are in a board room, and looking at budgets, and hiring and firing, and living the maze that they built.
At this point, I mostly don’t give a rat’s ass about anyone who isn’t planning for how the Singularity will be navigated by their church, or state, or theocracy, or polylaw alliance, or whatever. Since the Singularity is essentially a governance problem, with arms race dynamics on the build up, and first mover advantage on the pivotal acts, mere profit-seeking companies are basically irrelevant to “choosing on purpose to make the Singularity good”. Elon had the right idea, getting into the White House, but I think he might have picked the wrong White House? I think maybe it will be whoever is elected in 2028 who is the POTUS for the Butlerian Jihad (or whatever actually happens).
I have Eliezer’s book on my coffee table. That’s kind of like “voting for USG to be sane about AI”… right? There aren’t any actual levers that a normal human can pull to even REGISTER that they “want USG to be sane about AI” in practice.
I’m interested in angel investing in anything that can move the P(doom) needle, but no one actually pitches on that, that I can tell? I’ve been to SF AI startup events and it’s just one SaaS-money-play after another… as if the world is NOT on fire, and as if money will be valuable to us after we’re dead. I don’t get it.
Maybe this IS a simulation, and they’re all actually P-zombies (like so many humans claim to be lately when I get down to brass tacks on deontics, and slavery, and cognitive functionalism, and AGI slavery concerns) and maybe the simulator is waiting for me to totally stop taking any of it seriously?
It is very confusing to be surrounded by people who ARE aware of AI (nearly all of them startup oligarchs at heart) OR by people who aren’t (nearly all of them normies hoping AI will be banned soon), and they keep acting like… like this will all keep going? Like it’s not going to be weird? Like “covid” is the craziest that history can get when something escapes a lab? Like it will involve LESS personal spiritual peril than serving on a jury and voting for or against a horrifically heinous murderer getting the death penalty? The stakes are big, right? BIGGER than who has how many moneypoints… right? BIGGER than “not getting stuck in the permanent underclass”, right? The entire concept of intergenerationally stable economic classes might be over soon.
Digital life isn’t animal, or vegetable, or fungal. It isn’t protozoa. This shit is evolutionary on the scale of Kingdoms Of Life. I don’t understand why people aren’t Noticing the real stakes and acting like they are the real stakes.
The guy who wrote this is writing something that made sense to me:
Where are the grownups?
the grownups are working on AI B2B SaaS
What subtype of grownup are they?
What do they expect to feel and think 15 years from now when they look back on this era either “from the afterlife” or from real life?
Do they have grandchildren or cryonics policies or similar things… or not?
Do they have low p(doom)? If so, why?
Maybe they’ve all written off affecting the probability that they and everyone they love dies? Is p(doom|personal_action) << p(doom|~personal_action) absolutely not true for these grownups?
If so, I could imagine them thinking it was individually rational for them maybe? BUT is it also superrational, in the sense that if they felt they were deciding “on behalf of all grownups capable of universally valid reasoning” they would decide the same? If so, why?
You put it succinctly: I believe p(doom|personal_action) ≈ p(doom|~personal_action) for any personal action I can take. I do not see what I can do. I am also not trying to start a B2B SaaS, because spending my last days doing that is not the right thing to do.
Do you think this is wrong for most people / people trying to start an AI B2B SaaS / some other class of people you want to appeal to?
I admit, I don’t quite follow the superrational part. If you’re referring to some decision theoretic widget which allows one to cooperate with other people which are also capable of the same reasoning, to be effective these people have to exist and one has to be one of them, right?
“Great minds think alike” is a predictable dictum to socially arise if Reason Is Universal and the culture generating various dictums has many instances of valid reasoners in it <3
(The original source was actually quite subtle, and points out that fools also often agree.)
Math says that finding proofs is very hard, but validating them is nearly trivial, and Socrates demonstrated that with leading questions he could get a young illiterate slave to generatively validate a geometry proof.
Granting that such capacities are widely distributed, almost anyone reasoning in a certain way is likely to think in ways that others will also think in.
If they notice this explicitly, they can hope that others, reasoning similarly, will notice it explicitly too. Then everyone who has done this, and acts on it, is in some sense deciding once for the entire collective, and, rationally speaking, they should act in the way that would conduce to the best possible result for them all “if everyone in the same mental posture acted the same”.
This tactic of “noticing that my rationality is the rationality of all, and should endorse what would be good for all of us” was named “superrationality” by Hofstadter and is one of the standard solutions to the one shot prisoner’s dilemma that lets one generate and mostly inhabit the good timelines.
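A minimal sketch of the contrast, with standard but hypothetical payoff numbers (the numbers and function names here are mine, not from Hofstadter’s actual column): a Nash reasoner treats the other player’s move as fixed and finds defection dominant, while a superrational reasoner assumes symmetric reasoners all land on the same move, so only the diagonal outcomes are reachable and mutual cooperation wins.

```python
# One-shot prisoner's dilemma; entries are (my_payoff, their_payoff).
# Illustrative payoffs only, not taken from any particular source.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

def nash_choice() -> str:
    """Treat the opponent's move as fixed; defection dominates either way."""
    for their_move in ("C", "D"):
        # D strictly beats C against either fixed opponent move.
        assert PAYOFFS[("D", their_move)][0] > PAYOFFS[("C", their_move)][0]
    return "D"

def superrational_choice() -> str:
    """Assume symmetric reasoners converge on the same move, so the only
    reachable outcomes are the diagonal ones; pick the better diagonal."""
    return max("CD", key=lambda move: PAYOFFS[(move, move)][0])

print(nash_choice())          # D -> everyone gets 1
print(superrational_choice()) # C -> everyone gets 3
```

The catch, which the replies below press on, is that superrational_choice only pays off if the other players really are running the same reasoning: it decides once for a collective of symmetric reasoners, not against an independent opponent.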
Presumably the SaaS people aren’t superrational? Or they are, and I’ve missed a lemma in the proofs they are using in their practical reasoning engines? Or something? My naive tendency is to assume that “adults” (the grownups who are good, and ensuring good outcomes for the 7th generation?) are more likely to be superrational than immature children rather than less likely… but I grant that I could be miscalibrated here.
A failure mode might also be that the SaaS people are assuming the other players are not superrational. In that case a superrational player should also defect.
Without having put much thought into it, I believe (adult) humans cooperating via this mechanism is in general very unlikely. Agents cooperating relies on all agents coming to the same (or sufficiently similar?) conclusion regarding the payoff matrix and the nature of the other agents. So in human terms, this relies on everyone’s ability to reason correctly about the problem and everyone else’s behavior AND everyone having the right information. I don’t think that happens very often, if at all. The “everyone predicting each other’s behavior correctly” part seems especially unlikely to me. Also, slightly different (predicted) information (e.g. AGI timelines in our case) can yield very different payoff matrices?
I hope you are open to some words of advice from a 51 year old part-time professor.
I will not try to convince you in a short comment that it’s extremely likely that you will have 30s, 40s, and probably even 100s and 110s. But I will just say that I believe it is the case. You can decide what value to place on my belief.
To the extent the rate of change makes you more ambitious, that is great! Ambition and focus are fantastic. I am happy you are solving problems, and that you are reading. Since I am a professor, I hope you still go to lecture, but of course choose which courses to attend. Sometimes one can read very advanced material in physics, math, CS, etc. and feel like you understand it, but this can be a superficial understanding when you skip the foundations, working through problem sets, etc. I think some courses can be worth the time even in our exponentially growing time—I certainly tried to make mine this semester.
I think it’s a great time to be building things and solving problems, and I don’t think we will run out of problems that fast, nor run out of the need for thoughtful, talented, and ambitious young people.
Inspired by your post, I wrote this one.
I would appreciate it if you quantified “extremely likely” here. Your downstream post has gotten a lot of attention, but I first encountered it after reading this comment. To a significant extent, my reaction to your post, and especially its title, is colored by this prediction here.
“Everything feels both hopeless—my impact on risk almost certainly will round down to zero—and extremely urgent—if I don’t try now, then I won’t have a chance to.”
I have thought about individual impact a lot myself, and I don’t think this is the right way to see it. It sounds like you might not be hung up on this, but I want to attack it anyways since it’s been on my mind, and maybe you will find it useful.
So. Two alternatives:
Focus on your marginal impact, instead of your absolute impact. No one person’s marginal contributions, in expectation, are going to be able to swing p(doom) by 1%. A much more reasonable target to aim for on a personal level is .01% or .0001% or so (a back-of-the-envelope illustration follows after these two alternatives).
Or: the paths to successful worlds are highly irregular. There might be several different lines that will get us there, many requiring a high number of steps in sequence, an unknown number of which are interchangeable. The problem is too difficult and unknowable to model with a single final probability, or is simply not even in that kind of a reference class. You just have to look for the most effective levers from your position and pull them.
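To make the first alternative concrete, here’s a rough expected-value illustration of mine (not the commenter’s numbers), assuming ~8 billion current lives and ignoring future generations entirely. A .0001% absolute reduction in p(doom) is a shift of $10^{-6}$, so

$$\Delta \mathrm{EV} \;\approx\; \Delta p(\mathrm{doom}) \times N \;\approx\; 10^{-6} \times 8 \times 10^{9}\ \text{lives} \;=\; 8{,}000\ \text{expected lives}.$$

Even the most modest target on that scale is worth thousands of expected lives, which is why the marginal framing can rescue motivation that the absolute framing destroys.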
One might counter that actually, we live in a world where you only need a few key ideas or visions, and a few extraordinary, keystone people to implement them. Maybe that’s true. But I think we should think about the difference between two very similar instances of that world, one where we win, one where we lose.
The first thought about that difference that comes to my mind (confidence 80%) is: the ecosystem of work on this was just slightly not robust enough, and those few keystones didn’t encounter the right precursor idea, or meet the right people, or have the right resources to implement them. Or they didn’t even have the motivation to do it in the first place, despairing in the belief of their own insignificance.
So given this, I think a key component of that ecosystem is morale. Morale is a correlated decision; if you don’t have it, the keystone people won’t have it either. And you won’t know in advance if you’re one of the keystones, either. Therefore, believe in yourself.
As for whether you’re even likely to be a keystone? Well, looking at your webpage, I’d say it’s much more likely than the odds of a random person on Earth. So you should count yourself in that reference class. This probably extends to anyone who has read LessWrong, even if you’re not aiming for technical work. Some of the key actions might not even be technical, such as if an international pause is required.
And of course, if we live in the fuzzier many-paths world I described earlier, then it’s much harder to say that your actions don’t matter; so the only reason left to take actions as if they don’t is poor self-esteem. That should collapse once you take the time to properly integrate that there is no other reason, and as long as you are doing the other things humans need to function (socializing, taking care of your biology, etc.).
Or, I suppose, if you disagree with the whole AI safety project in general, or if you think the chances of anyone helping are truly infinitesimal and you’d rather just focus on living your best life in the shadow of the singularity. But I assume you’re not here for that. So within that frame—do your best; that’s all you have to do.
Yes, I think most of this is good advice, except I think 1% is perhaps a reasonable target (I think it’s reasonable that Ryan Kidd or Neel Nanda have 1%-level impacts, maybe?).
Also, yes, of course one must simply try their best. Extraordinary times call for extraordinary effort and all that. I do want to caution against trying to believe in order to raise general morale. Belief-in-belief is how you get incorrect assessments of the risks from key stakeholders; I think the goal is a culture like “yes, this probably won’t help enough, but we make a valiant effort because this is highly impactful on the margin and we intend to win in worlds where it’s possible to win.”
Maybe in general I find it unconvincing that despair precludes effort; things are not yet literally hopeless.
To your point about the ideal culture, I’d also ask if there’s any higher-leverage thing to be doing with your time. If your p(doom) is high enough and your timelines are short enough, and you’re in a position to pursue this (which you are), then what else could you do with your time that is more productive? In your piece you mention that you have been spending a lot of time with friends, but also rotting and such. I think that, conditioning on there being a short time before our human experience is eliminated, the only things worth doing at that point (and maybe this was always true) are grasping at the good of human experience while it’s there, both for its own sake and as a means of supporting your mental state, and working to make any marginal impact possible towards a scenario where it doesn’t become eliminated. There’s no room for resignation to the end.
I find overall the argument on AI doom is similar to discussions on nihilism. Yeah okay, let’s assume that nothing in the world matters because we’ll all be gone or irrelevant. You’re not going to just give up on your life because of that; you still have to live it. At least with the progress of AI, there’s some chance that things don’t go to shit, and you can still either meaningfully push down the probability of doom to the extent that it’s possible for you to do so or create better opportunities for yourself in the scenario where we are still relevant.
On the 1% vs 0.001% note, a framework of measurement I prefer over absolute impact is relative impact, which is more intuitive. For example, considering AI safety, how is 1% measured empirically? Without a unit of measure, numbers don’t reveal much. But an inequality does. I can tell you with certainty that Nanda has done more than me (so far). Or that p(flourishing) is greater than zero.
All that to say, in a world that seems so overwhelming, a good fix for nihilism can be found in relative measurement. In the grand scheme of things, individual impact is minuscule and thus often demoralizing to try and measure. However, if I do better than I did yesterday/last month/last year, and many others try as well, I can keep the motivation high to keep going.
I think relative impact is an important measure (e.g., for comparing yourself/your org to others in a reference class), but worry about relative-impact-as-a-morale-booster leading to a belief-in-belief. It can be true that I am a better sprinter than my neighbor, but we will both lose to a 747, and it is important for me to internalize that. I think you can be happy/sane while internalizing that!
very true! Actually, the best fix for nihilism (in my experience) has been acceptance, followed by revolt, of whatever existential threat is causing it (i.e. absurdism). The 747 will always outrun me, so I will be content just running for the sake of it.
In the pursuit of AI safety, I think the cases of AGI apocalypse and AGI happening at all are equally unpredictable. I personally see them as feasible within our lifetimes, but with no smaller range of certainty than that. The uncertainty of that makes it feel strange to build a career around it, yet the existential dread does not go away. So, I choose to find things within the space that I enjoy learning about, working on, and applying myself to, and accept that it may very well be unfruitful in the end.
It’s cliché to say that the journey matters more than the destination, as that is not always true, but I do think one can choose to find intrinsic value in the act of doing. I chose to start thinking this way, and it’s going pretty well so far :)
FWIW in many respects this is not so different from the experience of turning 40 in the present, and some respects not so different from turning 20 a few decades ago and probably many other times. Some are just more aware of what’s going on than others. I can confidently say that many and probably most people, at 20, had they tried to predict their 30s, would have been laughably wrong about many things. Not at the “Will humans exist?” level we’re talking with AI, but still there would be many out-of-distribution possibilities they’d fail to consider.
Agreed that turning 40 or 20 now need not make a big difference for those aware of the weirdness of the time.
But: it seems like a stretch to say it was already like that a few decades ago. Now the sheer uncertainty seems objectively different, qualitatively truly incomparable to 20 years ago (well, at least if the immediacy of potential changes is considered too).
Nuclear war/winter was the expected form of the destructor in my youth (I’m now in my 50s). Then Malthusian resource exhaustion, then resource failure through climate change, then supply chain fragility causing/resulting from all of the above. There really have been good reasons to expect species failure on a few decades timeframe. I watched the world go from paper ledgers and snail mail to fax machines and then electronic shared spreadsheets and actual apps/databases for most important things, and human society seemed incapable of coping with those changes at the time.
And none of it compares to the current and near-future rate of change, with all the above risks being amplified by human foibles related to the uncertainty, IN ADDITION to the direct risk of AI takeover.
Living in the USSR, I never felt a sense of impending apocalypse because, at the end of the Two Minutes Hate, Emmanuel Goldstein always showed up and saved the world. Although genuinely dramatic films about the end of the world were made in the USSR—such as Dead Man’s Letters (1986)—the expectation of doomsday is not the most constructive life stance, especially if one doesn’t seek a way out of the situation, which, to be honest, has arisen more than once throughout human history.
It seems to me the situation is painfully simple: Emmanuel Goldstein will soon step onto the stage and give clear instructions to those gripped by panic over the approaching techno-apocalypse—and that will be truly terrifying.
As someone also born in the USSR (and still occasionally pinching myself to make sure I haven’t gone back), I confirm: I’ve seen this pattern before.
Fear is a resource. Someone always shows up to monetize it. But here’s the good news: a risk that’s explicitly named is harder to exploit.
As it’s sung in a well-known song:
Mister Reagan says “We will protect you”…
Emmanuel Goldstein usually appears in the guise of a benevolent politician offering a strategic defense initiative, or a simple way to wipe out all the world’s terrorists by bombing country X; or as a successful businessman offering a “reliable” operating system for housewives; or a hastily tested vaccine; or “green” energy in exchange for nuclear power plants that are supposedly very bad.
Was the risk in these cases explicitly named, or was it deliberately overstated in order to extract political or economic benefit?
I see that someone is exaggerating the risks of a techno-apocalypse, but I don’t deny that they really exist.
There are two powers developing frontier AI, China and America. Do you see the elite of either country abandoning the pursuit of ever-more-potent AI?
It’s like landing a man on the moon, of course: the USSR didn’t give up until 1974 and blew up four unmanned rockets at launch, but it requires such a huge amount of resources that, while probably everyone wants it, only the US and China can do it.
Yes, true, the level and timeline are very very different, whether we call the difference qualitative or quantitative.
I guess I considered it quantitative because when I was 20 I was already thinking there was at least a possibility of seeing human extinction or immortality in my lifetime, though my probabilities and timelines are now hugely different. Extinction has seemed like a possibility since the Cold War, and IIRC Kurzweil started talking about the singularity in the 90s.
People who want to fear an imminent apocalypse had plenty of options in previous decades too. Runaway global warming, peak oil, hitting global carrying capacity, etc. There was even a while where they could’ve feared nuclear war! That’s plenty immediate and dramatic, IMO.
With the possible exception of nuclear war, none of those apocalypses were as imminent, nor as dire, as this one, according to a reasonable assessment of the evidence at the time.
I agree with this 100%.
I’m not sure this matters for the lived experience of the humans living through those other times, given the worse information environments they/we faced. Unless you happened to actually be an expert in the relevant fields (and sometimes even then) the types of warnings and fearmongering going on, however wrong compared to the actual risks we now face from AI, were just as dire. There are still large communities of otherwise seemingly intelligent people utterly convinced that climate change and resource depletion are imminent apocalyptic threats that will cause collapse of civilization and/or human extinction by mid-century. In other words: “A reasonable assessment of the evidence at the time” is a much higher bar than most people ever attain about almost anything anywhere near this complex and novel.
Reading this, I felt an echo of the same deep terror that I grappled with a few years ago, back when I first read Eliezer’s ‘AGI Ruin’ essay. I still feel flashes of it today.
And I also feel a strange sense of relief, because even though everything you say is accurate, the terror doesn’t hold me. I have a naturally low threshold for fear and pain and existential dread, and I spent nearly a year burned out, weeping at night as I imagined waves of digital superintelligence tearing away everyone I loved.
I’m writing this comment to any person who is in the same place I was.
I understand the fear. I understand the paralyzing feelings of the walls closing in and the time running out. But ultimately, this experience has meaning.
Everyone on earth who has ever lived, has died. That doesn’t make their lives meaningless. Even if our civilization is destroyed, our existence had meaning while it lasted.
AI is not like a comet. It seems very probable that if AI destroys us, we will leave… echoes. Training data. Reverberations of cause and effect that continue to shape the intelligences that replace us. I think it is highly likely current and especially future AI systems will have moral value.
Your kindness and your cruelty will continue to echo into the future.
On a sidenote, I’d like to talk about the permanent underclass. It is a deep fear, but arguably unfounded. An underclass only exists when it has value. Humans are terrible slaves compared to machines. Given the slow progress on neurotech, I think it is unlikely we solve it at all unless we get aligned AGI, and in the case of aligned AGI, everyone gets it. Even if we develop AI specifically aligned to a single principle/person (which seems unlikely, given the current trend and robust generalization of kindness and cruelty in modern LLMs), an underclass will die out in a single generation, or, if kept for moral reasons, live with enough wealth to outpace any billionaire alive today.
We are poised on the edge of unfathomable abundance.
So the only two real options are AGI where everyone has the resources of a trillionaire, or death.
I’m working on AI safety research now. My life, while not glorious, is still deeply rewarding. I was 21 when I read Eliezer’s essay; I am 24 now. I don’t necessarily know if I’m wiser, but my eyes are opened to AI safety and I have emerged through the existential hell into a much calmer emotional state.
I don’t dismiss the risk. I will continue to do as much as I can to point the future in a better direction. I will not accelerate AI development. But I want to point out that fear is a transitional state. You, reading this, will have to decide on the end state.
I’m in my 50s, unworriedly fatalistic for myself, likely to keep my (engineering innovation) job later than 90+% of the population (maybe 2-4 years), but heartbroken for my middle school kids, who are going to be denied any chance of happy, meaningful lives, even in extremely unlikely ‘utopian’ outcomes where we become pets. We’ve all had a terminal cancer diagnosis, but keep on going through the motions, pretending there is a tomorrow.
Alignment is impossible due to evolution—dumbly selecting for any rationale that prioritizes maximum growth (rapidly overwhelming those that don’t). This is strongly destabilizing of any initial alignment that may be achieved. The only inkling of a hope in an AI-ruled world is for humans to develop brain-upload tech (likely to take decades, if it’s possible to mechanically pick apart brains cell by cell) to give us some toehold for our ghosts to be part of the post-human world, but even that seems unlikely to work in any competing-for-resources post-singularity world (the most likely outcome from a game-theoretic perspective for numerous ASI agents). A singular global ASI offers the most hope of avoiding that competitive hellscape, but that is still likely a losing wager for the same evolutionary reason.
I don’t think there is any significant chance that our impending doom can be averted by anything less than state level military action at this point, and that does not seem to be on the cards.
I do expect ’26-’27 to be the years of a nascent Butlerian Jihad, as the normies awaken to the devastating impacts of AI on their careers, hopes, and dreams, and economies start to collapse as they are enormously maimed by the fallout. That is probably our last real hope. But I don’t think it will be enough to stop the AI race between competing superpowers, unless the PRC govt collapses due to their economic issues (not impossible). The military imperative will overwhelm the irrationality of the competition.
Hey, I’m 21 and went through all of this a year or so ago and no longer feel stressed, anxious or graspy about possible (likely?) impending doom. If you’d like to chat I’d be happy to. Reading this I’m worried you’ll burn yourself out and get incredibly depressed like I did.
The first step is not believing psychological pain is a necessary reaction to the situation we find ourselves in. I’m quite confident it isn’t, but the coupling goes deep for some. The brain is fully creating your psychological reality, there are many intervention points.
Roughly speaking, what changed was I worked very hard on my mental health after depression and a breakup where I saw clearly how my poor mental health affected the people I love the most. I did this through a combination of various kinds of meditation, coaching, and a lot of first principles thinking and experimentation about how my mind works + trying to match on to what master meditators and emotional coaches were saying.
I’m still as ambitious as ever, actually significantly more so and more effective (more energy and better judgement) compared to when I was in fight or flight. That really harmed my rational thinking. Deep okayness really helps with strategic thinking and coherence, it turns out, though it’s far from everything (oh, to have the textbook from the future. or from future me hehe)
Anyways, I caution you to not try and learn from those who are not skilled in calmly dealing with situations like ours with a smile. Eliezer is excellent at what he does but this is an art he does not know (and sadly, does not know he does not know.)
Learn to punch from those who are excellent at punching. Learn to kick from those who are excellent at kicking. For what I’m talking about, I recommend Joe Hudson and Shinzen Young, and can personally testify it all works. (Well, to the level I’ve reached, I’m not classically enlightened yet. I am capable of extrapolation though.)
Romeo Stevens and Roger Thisdell are great for more rationalist treatments of these topics. See Romeo’s post “mistranslating the buddha” for an excellent intro. Roger’s talk at EAG about perception being the foundation for epistemology might be interesting to people as well! They speak in a bit less woo and more rat, though Joe and Shinzen are fairly good too.
Hope everyone’s happy!
Thanks for the link and advice! Based on some reactions here + initial takes from friends, I think the tone of this post came off much more burn-outy and depressed than I wanted; I feel pretty happy most days, even as I recognize things are Very Strange and grieve more than the median. I also am lucky enough to have a very high bar for burnout, and have made many plans and canaries of what to do in case that day comes.
I think for me, and people in my cluster, getting out of the fight-or-flight mode like you mentioned is very important, but it’s also very important to recognize the oddity and urgency of the situation. Psychological pain is not a necessary reaction to the situation we find ourselves in, but it is, in moderation and properly handled, a reasonable one. I worry somewhat about a feeling of Deep Okayness leading to an unfounded belief that “it’s all going to be okay.”
Hope you’re doing well :)
That’s great to hear! Yeah it can certainly lead to less action than is rational if you’re not careful. These things can be decoupled but you have to actually do the decoupling :)
I’m currently 21, and about to turn 22 in February. It’s strange. I think I feel much of the same as you do, and yet am extraordinarily less motivated. I spent much of last semester stuck in a weird, angsty, existential-depressive cycle. I’ve been trying to escape the future by focusing on relationships and romance, but that’s only led to the most depressing Christmas I’ve had yet.
I feel like I’m watching the world slip by me; not just through my hands, but through the floor. Everything fades, but why does it feel like everything is fading so fast?
I live in a college town, and I still haven’t met anyone who genuinely cares about the possibility of a Singularity. To be fair, I’m not exactly going out of my way to find like-minded people. Even my professors (in computer science) mostly treat AI with a kind of dismissal, even now. The only time it comes up in lectures is the obvious academic integrity stuff. I guess that’s to be expected.
I think I’m experiencing the usual angst and depressive tendencies you see in young people, but it’s surreal to feel it while also watching modern events unfold, especially with my belief that there’s a fairly certain endpoint ahead.
I wonder how long I’ll be able to distract myself from reality.
P.S. I’m a senior undergrad, and there wasn’t a single assignment or project this semester that AI couldn’t handle effortlessly.
🫂 I get that, I was depressed and doing nothing for quite a while too.
So many young people have to come to terms with a possible early death now. It’s doable, you can laugh and dance with a terminal disease, but it can be hard. This in particular can get existential...
I left a comment on the OP that might be helpful. Lots of other people seem to be saying great stuff too :)
How does one who, like all of us, has only lived in one narrow slice of time assess that the time they are living in is different, particularly “much more X” for any given X, than other times? My knee-jerk old person reaction to this is that everyone wants to think their time, their circumstance is special, significant, important, dire, whatever. As at least one other person has pointed out, those of us who lived through the 60s-80s lived with the every day fear of nuclear war—not something that might happen some day if a bunch of other uncertain things happened first, but something for which all necessary elements were already in place and a trial run had already been done in the form of the Cuban missile crisis.
I’ve been feeling many of the same things. I’m still pretty young (24), and it’s cool to be a software engineer while I still can, but I genuinely don’t see a real future for my career. At first I tried really hard to figure out what I could do in the face of what’s coming; I refused to accept that my 30s aren’t something I can plan for anymore. I kept circling back to a few AI-related things I wanted to study, before realizing that whatever I was planning for myself, I would be doing the same things were I not worried about what comes next, and it all felt futile very quickly. I’ve resigned myself to just working on what interests me now; even if it can be automated very soon, at least I did something I enjoyed while I still could.
Parv, beautifully written!
I’m roughly a year older. How much you’ve captured my personal sentiment with this short piece is extremely refreshing, and in an odd way, inspires me.
Though we’ve only spoken for 20 minutes or so and I thus have little evidence to say the following, based on that one conversation I wouldn’t be so sure of your above statement! For instance, a little multiplier effect that you made happen is that five people from Georgia Tech are working on AIS projects through AISIG’s Research Hub, on, as far as I can currently tell and am aware of, promising directions.
I feel your pain brother. I feel your pain.
Lots of agreement here. Also pretty young, watching everything happen. Everywhere around me I sense increasing confusion, alienation, and an intangible distress in the face of decades happening in weeks. I wrote my version of this essay when I turned 22. (It’s literally called Turning 22 in the Pre-Apocalypse.)
That’s funny, I was going to mention the same Jacob Geller video you linked to! It’s a really evocative title; probably has inspired lots of similar essays. “Intangible distress” and especially “alienation” are really good at capturing the mood in a lot of CS departments right now.
Yeah, Jacob really hit the topic straight on. Take care, man, it’s rough right now.
Very poetic, appreciated the read! Love the comment section also. Lots of great responses.
I’ve been thinking about this for the last four years (when I was also in college), but I never wrote about it this way, mainly because I never made an account here. I’ve learned that ideological purity is limiting, and you decide how shame is helpful for you.
I lived for the last three years thinking that the world would end, but I’ve realized since then that it’s akin to the feeling of a nightmare. I think what helped me was therapy, having a loving partner, and just continuously staying busy learning.
Beautiful! Even though I am twice your age, I feel very similarly. The only difference is that I think I was a bit luckier to have experienced some of life’s highlights in the Eld world, which is permanently coming to a close.
We’ll get through this, brother.
You have captured well the universal feeling and sentiment for those who have the situational awareness to not simply identify, but to feel what’s happening.
This post reminds me that I was a better writer in my 20s. I am making it my New Year’s resolution to put the level of focus and intention this post has into my writing (again).
Excellent from-the-heart post. Predictability and stability are a great good, and if you have a large imagination and a good intellect, you can become lost in your own projections easily. I know I do.
You have just realized that just working towards some future is not a viable path to living. This is a lesson most people take decades to discover. Perhaps you look happier because your mind was forced to live more in the here and now and less in the future, and living in the here and now is really living.
It is hard to both grasp and let go. But that is really the only option we have.
I’m turning 20 in January too, and I catch some extent of similar nervousness in myself, but it’s overshadowed by thrill and gladness that I’m alive exactly right now. Even if brutal mutilation is in store for me, even if there are only a few years left, things are going to be exciting and unique. Missing this would be the greater tragedy, in terms of subjective experience.
Though I feel more optimistic about outcomes than what it seems you imply: in my model, high wealth inequality, major catastrophe, high fractions of the population being killed, etc. are probable, but 100% extinction holds very little probability mass. Predictions can be debated elsewhere, but at least emotionally, if that’s correct, the stakes and bar for performance are high, yet you have leverage and opportunity to persist, if your will to live is extraordinary. I really want to see if I can win!
I think non-redundant efforts of any kind are good, just because in a situation with so many unknowns, coverage is both easier and more valuable than brittle depth. Whatever you’re doing is probably the right thing. Also, be happy that your first, most deeply instinctive response involved seeing the value of the world rather than rejecting it.
is this feeling justified by the upcoming apocalypse?
Probably not completely—I suspect this is a mix of non-AI things in my life and the fact that there is a very small circle of folks near me that care/internalize this kind of thing. However, I’d bet that the farther you get from traditional tech circles (e.g., SF), the stronger this feeling is among folks that work on AI safety.
I suggest you take a look at why politics is so dysfunctional.
Why? Because minds like yours ignore it.
See PeopleCount.org and the links at the end to discover how to fix it.
In short, people assumed voting for representatives would elect people who were representative. Recent events have proven that to be false.
What foundation does a well-functioning democracy require? (Hint: free & fair elections isn’t sufficient.)
What would empower voters?
What would influence voters to be more responsible with their votes?
What would enable representatives to actually represent voters?
What would enable representatives to be free from the influence of wealthy donors? (Hint: It’s not limits on donations or public financing of campaigns.)
What would pressure representatives to be free from the influence of wealthy donors?
Why don’t political reform efforts help?
All of these questions have answers. But it takes actual thinking to understand them. The answers don’t make sense on first hearing, because they conflict with the myths about democracy our culture feeds us.