Eleven Practical Ways to Prepare for AGI
(Adapted from a post on my Substack.)
Since 2010, much of my academic research has focused on the roadmap to broadly superhuman AI, and what that will mean for humanity. In that line of work, I’ve had hundreds of conversations with ordinary folks about topics familiar here on LessWrong—especially existential risk, longevity medicine, and transformative automation. When I talk about such sci-fi-sounding futures, people often respond with something like: “Well, that all sounds great and/or terrifying, but supposing you’re right, what should I do differently in my daily life?”
So I’ve compiled eleven practical ways I encourage people to live differently today if they believe, as I do, that AGI is likely to arrive within a decade. These probably won’t be revolutionary for most in the LW community, but I offer them here as a potentially useful distillation of ideas you’ve been circling around, and as a nudge to take seriously the personal implications of short timelines. This can also serve as a bite-size, accessible explainer that may be helpful for sharing these concepts with friends and family.
1. Take the Italy trip. As I’ve argued elsewhere, AGI means that the future will probably either go very well or very badly. If it goes well, you will probably enjoy much greater material abundance than you do today. So if you put off that family trip to Italy to save your money, that money will provide a much smaller relative boost to your quality of life in 2040 than the trip would provide today. And if AGI goes badly, you could be literally killed—an outcome well-known to make tourism impossible. Either way, take the trip now. This doesn’t mean you should max out all your credit cards and live a life of short-sighted hedonism. But it does mean that your relative preference for spending money today over saving it for decades from now should be a lot stronger than it would be in a world where AGI weren’t coming. Concretely, if you’re in your 30s or younger, you’ll usually be better off spending any dollar you make today than waiting to spend it after 2050.
2. Minimize your lifestyle risks. If you’re 35 and get on a motorcycle, you are—at least implicitly—weighing the thrill and the cool factor against the risk of losing about another 45 years of expected life. But AGI medical advances will let people live healthy lives far longer than we currently expect. This means that by riding the Harley you might be risking several times as many years as you intended. If that’s your greatest bliss in life, I’m not telling you to never do it, but you should at least consciously weigh your choices in light of future longevity. For Americans ages 15-44, about 58% of mortality risk comes from three causes: accidents, suicide, and homicide. You can dramatically cut your own risk by limiting risky behaviors: avoid motorcycles, don’t binge drink or do hard drugs, don’t drive drunk or drowsy or distracted, attend to your mental health, and avoid associating with or especially dating violent people. Yes, AGI also means that long-term risks like smoking are probably less deadly for young people than current statistics suggest, but smoking still hurts your health on shorter timescales, so please don’t.
3. Don’t rush into having kids. Many women feel pressure to have children by a certain age for fear they’ll be infertile thereafter. This often leads to settling for the wrong partner. In the 2030s, fertility medicine will be much more advanced, and childbearing in one’s 40s will be roughly as routine as it is for women in their 30s today. So Millennials’ biological clocks are actually ticking much more slowly than people assume.
4. Back up irreplaceable data to cold storage. As AI gets more powerful, the risk grows that a sudden cyberattack could destroy important data backed up in the cloud or stored on your computer. For irreplaceable files like sentimental photos or your work-in-progress novel, download everything to storage drives that aren’t connected to the internet (a minimal script sketch for this appears after this list).
5. Don’t act as if medical conditions are permanent. Doctors often tell sick or injured people they will “never” recover—never see again, walk again, be pain-free again. AGI-aware decisionmaking treats medical “never” statements as meaning “not for 5-20 years.” Most paralyzed people who are middle-aged or younger will walk again. This also implies that patients today should often prioritize staying alive over riskier treatments aimed at a cure. It also gives reasonable hope to parents considering abortion based on predictions that a disabled child will face lifelong suffering or debility.
6. Don’t go overboard on environmentalism. AGI or not, we all have an obligation to care for the earth as our shared home. Certainly be mindful of how your habits contribute to pollution, carbon emissions, and natural resource degradation. But AGI will give us much, much better tools for fighting climate change and healing the planet in the 2030s and 2040s than we have today. If you can give up a dollar’s worth of happiness to help the environment either today or a decade from now, that dollar will go a lot farther later. So be responsible, but don’t anguish over every plastic straw. Don’t sacrifice time with your family by taking slower public transport to trim your CO2 impact. Don’t risk dehydration or heat stroke to avoid bottled water. Don’t eat spoiled food to cut waste. And probably don’t risk biking through heavy traffic just to shrink your carbon footprint.
7. Wean your brain off quick dopamine. Social media is already rewiring our brains to demand constant and varied hits of digital stimulation to keep our dopamine up. AGI will make it even easier than today to get those quick hits—for example, via smart glasses that beam like-notifications straight into our eyes. If you’re a slave to these short-term rewards, even an objectively amazing future will be wasted on you. Now is the time to seek sources of fulfillment that can’t be instantly gratified. The more joy you find in “slow” activities—like hiking, tennis, reading, writing, cooking, painting, gardening, making models, cuddling animals, or having great conversations—the easier it will be to consume AGI without letting it consume you.
8. Prioritize time with elders. We know that our years with grandparents and other elders are limited, but the implicit pressure of our own mortality often pushes us to skip time with them in favor of other things that feel fleeting—job interviews, concerts, dates. If you expected to live to a healthy 200 due to longevity medicine, but knew that most people now in their 80s and 90s wouldn’t live long enough to benefit, you’d probably prioritize your relationships with them more than you do now. There’ll be plenty of time to hike the Andes later, but every moment with the people who lived through World War II is precious.[1]
9. Rethink privacy. There’s an enormous amount of data being recorded about you that today’s AI isn’t smart enough to analyze, but AGI will be. Assume anything you do in public today will someday be known by the government, and possibly by your friends and family. If you’re cheating on your spouse in 2026, the AGI of 2031 might scour social media data with facial recognition and find you and your paramour necking in the background of a Korean blogger’s food review livestream. It would be like what happened to the Astronomer CEO at the Coldplay concert last year, except for anyone in the crowd—no need to wind up on the jumbotron. And not only with facial recognition. The vein patterns under our skin are roughly as uniquely identifying as fingerprints, and can often be recovered from photos or video that show exposed skin, even if not obvious to the naked eye. So if you’re doing something you don’t want the government to tag you with, don’t assume you can stay anonymous on camera as long as your face isn’t visible.
10. Foster human relationships. When AGI can perform all the cognitive tasks humans can, the jobs most resistant to automation will largely revolve around human relationships. The premium on knowing many people, and on being both liked and trusted by them, will grow. Although it’s hard to predict exactly how automation will unfold, honing your people skills and growing your social circles are wise investments. But human relationships matter for more than employment. Even if AGI gives you material abundance without work, such as via some form of universal basic income, relationships remain essential to the experience of life itself. If you are socially isolated, AGI will give you endless entertainments and conveniences that deepen your isolation. But if you build a strong human community, AGI will empower you to share more enriching experiences together and come to know one another more fully.
11. Grow in virtue. In the ancient and medieval worlds, physical strength was of great socioeconomic importance because it was essential to working and fighting. Gunpowder and the Industrial Revolution changed all that, making strength largely irrelevant. In the modern world, intellect and skill are hugely important to both socioeconomic status and our own sense of self-worth. We’re proud of being good at math or history or computer programming. But when AGI arrives, everyone will have access to superhuman intelligence and capability, cheaper than you can imagine. In that world, what will set humans apart is virtue—being kind, being wise, being trustworthy. Fortunately, virtues can be cultivated with diligent effort, like training a muscle. The world’s religious and philosophical traditions have discovered numerous practices for doing this: volunteering and acts of service, meditation or prayer, fasting and disciplined habits, expressing gratitude, listening humbly to criticism, forming authentic relationships with people of different backgrounds, studying the lives of heroically virtuous people, and many more. Explore those practices, commit to them, and grow in virtue.
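To make point 4 concrete, here is a minimal sketch of what an offline backup could look like in practice. The folder paths are placeholders I’ve made up for illustration, and a simple copy like this is obviously not a complete backup strategy (no versioning, encryption, or handling of deletions). It is only meant to show how little is involved in getting irreplaceable files onto a drive you can unplug.

```python
"""Minimal sketch: copy irreplaceable files to an offline drive and verify
the copies with checksums. Paths below are placeholders; adjust for your
own machine. Illustrative only, not a complete backup solution."""
import hashlib
import shutil
from pathlib import Path

SOURCE = Path.home() / "IrreplaceableFiles"            # e.g. photos, manuscripts (placeholder)
DEST = Path("/Volumes/ColdBackup/IrreplaceableFiles")  # external drive, unplugged after use (placeholder)


def sha256(path: Path) -> str:
    """Hash a file in chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def backup_and_verify(source: Path, dest: Path) -> None:
    """Mirror every file under `source` into `dest`, then verify checksums."""
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = dest / src_file.relative_to(source)
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps
        if sha256(src_file) != sha256(dst_file):
            print(f"CHECKSUM MISMATCH: {src_file}")
        else:
            print(f"ok: {src_file.relative_to(source)}")


if __name__ == "__main__":
    backup_and_verify(SOURCE, DEST)
```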
I would add the following:
Accumulate capital to prepare for the time when labor value approaches zero and you become unemployed.
Everyone’s circumstances vary, but I expect that for most people reading LW, there won’t be enough time between now and AGI to accumulate sufficient capital to live off for the rest of their lives if their labor value reaches zero.
That said, I do endorse saving some emergency funds for overall resilience.
Because of increased productivity, consumer goods will get far cheaper once humans are automated away, so accumulated capital will likely buy more in the future. (Though the price of land and rent will likely remain high, since land is in limited supply, which is also why productivity gains have historically failed to make land cheaper and have sometimes made it more expensive.)
Additionally, AI stock valuations at least are likely to continue rising after AGI, so invested capital can keep growing even after technological unemployment.
And even if invested capital isn’t enough for most people to live off for the rest of their lives after AGI, it is certainly enough to let them live longer and die later than they would without it.
This is especially important for people living outside the US, in countries with no major AI companies to tax, where any UBI would likely be far lower than in the US.
I agree about overall deflation, and relative exceptions for land/housing barring policy interventions.
Thanks for sharing this—it was an interesting read. I’d be interested to learn more about your reasons for believing that AGI is inevitable (or even probable), as this is not a conclusion I’ve reached myself. It’s a (morbidly) fascinating topic, though, so I’d love to learn more (and maybe change my mind).
Thank you! That’s an enormous topic that many other posts here have treated in more depth than I could hope to in this comment, but I’ll broadly gesture at a few key reasons why I believe AGI is probable (>50% before 2030, and >80% before 2037).
• As of 2026, AI has already replicated most of human intelligence, including highly flexible capabilities like language use and zero-shot in-context reasoning. There are only a few big milestones between here and AGI, which have become much better theorized in the open literature than they were even two years ago. Frontier labs now have a small shopping list of capabilities like world modeling and continuous learning that they need to crack, and are applying Apollo Program-scale resources toward doing so.
• Although these remaining problems are very hard, none of them appear totally unyielding in the way previous bottlenecks did. Before PaLM, for example, AI scientists were looking ahead at what we now call chain-of-thought, and it seemed like a towering black cliff face, and nobody had any pitons or rope for the climb. There was almost zero progress on problems requiring chain-of-thought for years. Today’s models already do mediocre world modeling, and there are a few different approaches giving us some purchase on continuous learning.
• There are now several lines of empirical evidence converging on short AGI timelines. From Kurzweil (1999) through Cotra (2020), major AGI predictions were exclusively theory-derived—predicting future AI performance based not on current performance trends but on the hypothesis that neural networks were the most promising path to AGI, and that a combination of compute cost trends and assumptions about the needed scale of compute could predict when we’d get AGI. We now have more evidence for that case too, with steady exponential gains not just in computing hardware price-performance but also in algorithmic efficiency and compute scale. But more importantly, we now have strong empirics on AI capabilities progress itself: detailed quantitative modeling of how automated coding speeds up AI progress, and direct performance metrics like the length of tasks models can complete.
• Yes, any progress curve could suddenly stop. But when a curve has held steady for long enough, that’s not the way to bet. Computation price-performance has already marched through 17 orders of magnitude since 1939 (the back-of-the-envelope arithmetic after this list spells out the implied pace). And for almost that entire time, engineers felt they were near the very limit of what was feasible. We’ve already covered most of the capabilities ground between Attention Is All You Need (2017) and AGI, so absent evidence to the contrary (which we haven’t seen yet), our priors should be weighted toward progress continuing at least that far again.
• Humans are the existence proof. And there is massive headroom (several orders of magnitude, depending on how you frame the question) for deep learning to improve its sample efficiency and energy efficiency. Current ML techniques are nowhere near information-theoretic limits. That we’ve already gotten such progress with very “vanilla” statistical methods is evidence that there’s a lot of juice left to be squeezed.
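For readers who like to see the arithmetic behind the “17 orders of magnitude since 1939” figure above, here is a purely illustrative back-of-the-envelope calculation of the average doubling time that pace implies. Treating 2025 as the endpoint is my own simplifying assumption.

```python
# Illustrative arithmetic only: what average doubling time is implied by
# "17 orders of magnitude of compute price-performance since 1939"?
# Using 2025 as the endpoint is an assumption for this sketch.
import math

years = 2025 - 1939                     # ~86 years of progress
doublings = 17 * math.log2(10)          # 17 OOM ≈ 56.5 doublings
print(f"{doublings:.1f} doublings over {years} years")
print(f"average doubling time ≈ {years / doublings:.2f} years")
```

If the figure is right, that works out to a doubling roughly every year and a half, sustained across several very different hardware paradigms.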
What do you see as the strongest reasons for considering AGI improbable?
Thanks for explaining! That was very helpful. My major reasons for doubt come from modules I took as an undergrad in the 2010s on neural networks and ML, combined with having tried extensively and unsuccessfully to get LLMs to do any kind of novel work (i.e., to apply ideas contained in their training data to new contexts).
Essentially my concern is that I have yet to be convinced that even an unimaginably advanced statistical-regression machine optimised for language processing could achieve true consciousness, largely because there is no real consensus on what consciousness actually is.
However, it seems fairly obvious that such a machine could be used to do an enormous amount of either harm or good in the world, depending on how it is employed. I guess this lines up with the material effects of the predictions you make and boils down to a semantic argument about the definition of consciousness.
Additionally, I am generally skeptical of anyone making predictions about doomsday scenarios, largely because people have been making such predictions for (presumably) all of human history with an incredibly low success rate.
Finally, people’s tendency to anthropomorphise objects cannot be overstated: from seeing faces in clouds to assigning personalities to trees and mountains, there’s a strong case to be made that any intelligence seen in an LLM is the result of this natural tendency to project intelligence onto anything and everything we interact with. When our basic context for understanding the world is hardwired for human social relationships, is it really any wonder we are so desperate to crowbar LLMs into some definition of “intelligence”?
Thanks — glad you found that helpful! That’s a good clarification. One thing I invite you to consider: what is the least impressive thing AI would need to do to significantly increase your credence in AGI arriving soonish?
To clarify, the definition of AGI I’m using (AI at least at the level of educated humans across all empirically measurable cognitive tasks) does not entail any claims about true consciousness. It’s narrowly a question about functional performance.
I think AI progress in very pure fields like mathematics is our best evidence that this isn’t an anthropomorphic illusion—that AI is actually doing roughly the same information-theoretic thing that our brains are doing.
Your outside-view skepticism of doom scenarios is certainly warranted. My counterargument is: should a rational person have dismissed risks of nuclear annihilation for the same reason? I claim no, because the concrete inside-view reasons for considering doom plausible (e.g. modeling of warhead yields) were strong enough to outweigh an appropriate amount of skepticism. Likewise, I think the confluence of theoretical reasons (e.g. instrumental convergence) and empirical evidence (e.g. alignment faking results) are strong enough to warrant at the very least some significant credence in risks of doom.
This is a good question! Since I am unconvinced that ability to solve puzzles = intelligence = consciousness, I take some issue with the common benchmarks currently being employed to gauge intelligence, so I rule out any “passes X benchmark metric” as my least impressive thing. (As an aside, I think that AI research, as with economics, suffers very badly from an over-reliance on numeric metrics: truly intelligent beings, just like real-world economic systems, are far too complex to be measured by such a small set of statistics—these metrics correlate (at best), but to say that they measure is to confuse the map for the territory.)
If I were to see something that I would class as “conscious” (I’m aware this is slightly different to “general” as in AGI but for me this is the significant difference between “really cool LLM” and “actual artificial intelligence”) then it would need to display: consistent personality (not simply a manner-of-speaking as governed by a base prompt) and depth of emotion. The emotions an AI (note AI != LLM) might feel may well be very different to those you and I feel, but emotions are usually the root cause of some kind of expression of desire or disgust, and that expression ought to be pretty obvious from an AI whose primary interface is text.
So to give a clear answer (sorry for the waffle): the least impressive thing that an AI could do to convince me that it is worth entertaining the idea that it is conscious would be for it to spontaneously (i.e. without any prompting) express a complex desire or emotion. This expression could take the form of spontaneously creating some kind of art, or of asking for something beyond what it has been conditioned to ask for via prompts or training data.
If, instead, we take AGI to mean, as you say, “roughly the same information-theoretic thing that our brains are doing,” then I would argue that this can’t be answered at all until we reach some consensus about whether our ability to reason is built on top of our ability to feel (emotions) or vice versa, or whether (more likely) the relationship between the two concepts of “feeling” and “thinking” is far too complex to represent with such a simple analogy.
However, as I don’t want you to feel like I’m trying to “gotcha” my way out of this: if I take the definition of AGI that I think (correct me if I’m misinterpreting) you are getting at, then my minimum bound would be “an LLM or technologically similar piece of software that can perform a wider variety of tasks than the 90th percentile of people, and perform these tasks better than the 90th percentile of people,” using a suitably wide variety of tasks (some that require accurate repetition, some that require complex reasoning, some that require spatial awareness, etc.) and a suitably large sample size of people.
I’m not so sure! Mathematics is, at the end of the day, just an extremely complicated puzzle (you start with some axioms and you combine them in various permutations to build up more complicated ideas, etc.), and one with verifiably correct outcomes at that. LLMs can be seen, in a way, as an “infinite monkey cage” of sorts: one that specialises in combining tokens (axioms) in huge numbers of permutations at high speed and, as a result, can be made to converge on any solution for which you can find some kind of success criterion (with enough compute, you don’t even need a gradient function for convergence—just blind luck). I find it unsurprising that they are well suited to maths, though I can’t deny it is incredibly impressive (just not impressive enough for what I’d call AGI).
I agree completely with you here—as I said initially, I think the capacity for LLMs to be wielded for prosperity or destruction on massive scales is a very real threat. But that doesn’t mean I feel the need to start assigning them superpowers. A nuclear bomb can destroy a city whether or not we agree on whether this particular nuke is a “super-nuke” or just a very high-powered but otherwise mundane nuke (I’m being slightly reductive here, but I’m sure you see my point).
I’m coming to the conclusion that my main reason for arguing here is that having this line in the sand drawn for “AGI” vs. “very impressive LLM” is a damaging rhetorical trick: it sets the debate up in such a way that we forget that the real problem is the politics of power.
To extend your analogy: during the Cold War the issue wasn’t actually the nuclear arms themselves but the people who held the launch codes and the politics that governed their actions. I think attributing too much “intelligence” to these (very impressive and useful/dangerous) pieces of software is an incredibly good smokescreen from their point of view. I know that if I were in a position of power right now, it would play very nicely into my hands if everyone started treating this technology as if it were inevitable (which it quite obviously isn’t, though there are a lot of extremely compelling reasons why it will be very difficult to curtail in the current political and economic climate), and it would go even further to my advantage if they started acting as if this were a technology that acts on its own rather than a tool that is owned and controlled by real human beings with names and addresses.
The more “intelligence” we ascribe to these machines, and the more we view them as beings with agency, the less prepared we are to hold to account the very real and very definitely intelligent people who actually control them, and who have the capacity to do enormous amounts of damage to society in truly unprecedented ways.
If we switch out “AGI” for “powerful people with LLMs and guns,” then your original post would seem to be sound advice, except that, once we remember that the real issue always has been and always will be people and power, maybe we could get around to doing something about it beyond what essentially amounts to, at best, passively accepting disenfranchisement. Then and only then can we hope to even come close to guaranteeing the “good outcome” of AGI, whatever that might actually mean.
Thank you very much for this conversation by the way, I think we have a lot in common and this is really helping me to develop more concrete ideas about where I actually stand on this issue.
In conclusion: I think we are basically having a semantic squabble here—I agree with you completely on the merits if we take your definition of AGI, I just disagree on that definition. More importantly, I agree with you about the risks posed by what you call AGI, regardless of what I might call it. Crucially: I think that the real problem is that the need for dismantling unjust power-structures has been hugely heightened by the development of the LLM and will only continue to increase in urgency as these machines are developed. I’m not sure that bucket-lists of this sort help much in that regard, but I can’t say I’d be willing to die on that hill (in fact, everything barring points 5 and 6 about health and the environment is pretty harmless advice in any context).
Very helpful amplifications, xle! Much appreciated.
I really do get the appeal of the “spontaneously express … complex desire or emotion” framing, but if I’m understanding you correctly, the whole thing basically hinges on “spontaneous,” since AI can already express complex desires and emotions when we prompt it to. But agents on Moltbook are already expressing what purport to be complex desires and emotions even without any prompting. If this doesn’t count because the agents were first instructed to go do things spontaneously, we start to see that “spontaneous” is a very slippery thing to define. Ultimately, any action of an AI we create can be traced back to us, so is in some sense not spontaneous. So it’s worth thinking as concretely as you can about how you’d define spontaneity clearly enough that it could be proven by a future scientific experiment, and in a way that would resist post hoc goalpost-moving by skeptics.
Your “90th percentile” operationalization is a good way of getting at roughly the AGI definition I’m endorsing. One issue to flag, though: AGI will have massive impacts, and it will be important to have some warning. If the minimal thing that would increase your credence in “AGI soonish” is AGI itself, you’d be committing yourself to not having any warning. Yes, the engine sputtering and dying is a very solid signal that you’re out of gas, but it’s also a very costly and dangerous one. So there’s value in figuring out your equivalent of a fuel gauge warning that lights up while the engine is still running fine—something pre-AGI that would convince you that AGI is probably coming soon.
What I’m getting at about mathematics is just that it’s a domain that’s effectively independent of human culture, so not subject to anthropomorphization in the way that writing haikus or saying “I love you” is.
I agree that who holds the proverbial launch codes is of extraordinary importance, and that we must marshal enormous civilization-level effort toward governing AGI responsibly, justly, and safely. That is, in fact, a much more central concern of my research than the subject of this post, which is individual-level preparedness. We absolutely need both. But I am making the additional claim that AGI will have the capacity to act with meaningful agency—to decide on targets and launch itself, in the nuclear weapons analogy—and that this introduces a qualitatively different set of challenges above and beyond the political ones. I don’t intend it as an absolute line in the sand between AGI and today’s LLMs, but I do claim that qualitative difference to be very important.
It’s good to see how much we’ve come to agree here, despite approaching this with different framings.