AKA My Most Likely Reason to Die Young is AI X-Risk
TL;DR: I made a model which takes into account AI timelines, the probability of AI going wrong, and probabilities of dying from other causes. I got that the main “end states” for my life are either dying from AGI due to a lack of AI safety (at 35%), or surviving AGI and living to see aging solved (at 43%).
Meta: I’m posting this under a pseudonym because many people I trust had a strong intuition that I shouldn’t post under my real name, and I didn’t feel like investing the energy to resolve the disagreement. I’d rather people didn’t de-anonymize me.
The model & results
I made a simple probabilistic model of the future, which takes seriously the possibility of AGI being invented soon, its risks, and its effects on technological development (particularly in medicine):
Without AGI, people keep dying at historical rates (following US actuarial tables)
At some point, AGI is invented (following Metaculus timelines)
At the point AGI is invented, there are two scenarios (following my estimates of humanity’s odds of survival given AGI at any point in time, which are relatively pessimistic):
We survive AGI.
We don’t survive AGI.
If we survive AGI, there are two scenarios:
We never solve aging (maybe because aging is fundamentally unsolvable or we decide not to solve it).
AGI is used to solve aging.
If AGI is eventually used to solve aging, people keep dying at historical rates until that point.
I model the time between AGI and aging being solved as an exponential distribution with a mean time of 5 years.
Using this model, I ran Monte Carlo simulations to predict the probability of the main end states of my life (as someone born in 2001 who lives in the US):
I die before AGI: 10%
I die from AGI: 35%
I survive AGI but die because we never solve aging: 11%
I survive AGI but die before aging is solved: 1%
I survive AGI and live to witness aging being solved: 43%
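The model above can be sketched as a short Monte Carlo loop. Everything numeric below is an illustrative placeholder, not the post's fitted values: the Gompertz-style mortality curve, the lognormal timeline parameters, and the survival curve are stand-in guesses.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000

BIRTH_YEAR = 2001
P_AGING_SOLVED = 0.80      # P(aging solved | survive AGI)
MEAN_YEARS_TO_CURE = 5.0   # exponential mean for AGI -> aging solved

def sample_death_year(rng):
    # Crude Gompertz-style stand-in for the US actuarial table.
    age = 0
    while True:
        p_die = min(1.0, 0.0005 * np.exp(0.085 * age))
        if rng.random() < p_die:
            return BIRTH_YEAR + age
        age += 1

def sample_agi_year(rng):
    # Lognormal offset from 2023; median offset of 11 years -> median 2034.
    return 2023 + rng.lognormal(mean=np.log(11), sigma=0.8)

def p_doom(year):
    # Placeholder "pessimistic" curve: ~50% in 2023, leveling off near 2%.
    return 0.02 + 0.48 * np.exp(-0.05 * (year - 2023))

outcomes = {k: 0 for k in ["die before AGI", "die from AGI",
                           "aging never solved", "die before cure",
                           "witness cure"]}

for _ in range(N):
    death = sample_death_year(rng)
    agi = sample_agi_year(rng)
    if death < agi:
        outcomes["die before AGI"] += 1
    elif rng.random() < p_doom(agi):
        outcomes["die from AGI"] += 1
    elif rng.random() > P_AGING_SOLVED:
        outcomes["aging never solved"] += 1
    elif death < agi + rng.exponential(MEAN_YEARS_TO_CURE):
        outcomes["die before cure"] += 1
    else:
        outcomes["witness cure"] += 1

for name, count in outcomes.items():
    print(f"{name}: {count / N:.1%}")
```

The point of the sketch is the control flow between the end states, not the numbers; with the placeholder curves the printed shares will not match the post's results.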
Here’s what my model implies for people based on their year of birth, conditional on their being alive in 2023:
As expected, the earlier someone is born, the likelier it is that they will die before AGI. The later someone is born, the likelier it is that they will either die from AGI or have the option to live for a very long time due to AGI-enabled advances in medicine.
Following my (relatively pessimistic) AI safety assumptions, for anyone born after ~1970, dying from AGI and having the option to live “forever” are the two most likely scenarios. Most people alive today have a solid chance of living to see aging cured. However, if we don’t ensure that AI is safe, we will never be able to enter that future.
I also ran this model with more conventional estimates of timelines and P(everyone dies | AGI): timelines three times as long as the Metaculus timelines, and a P(everyone dies | AGI) of 15% in 2023 that decays exponentially at a rate where it hits 1% in 2060.
For the more conventional timelines and P(everyone dies | AGI), the modal scenarios are dying before AGI, and living to witness aging being solved. Dying from AGI hovers around 1-4% for most people.
Without AGI, people keep dying at historical rates
I think this is probably roughly correct, as we’re likely to see advances in medicine before AGI, but nuclear and biorisk roughly counteract that (one could model how these interact, but I didn’t want to add more complexity to the model). I use the US actuarial life table for men (which is very similar to the one for women) to determine the probability of dying at any particular age. For ages after 120 (which is as far as the table goes), I simply repeat the probability of dying at age 120, which is 97.3%. This is wrong but I expect the final estimates to mostly be correct for people born after 1940.
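A minimal sketch of the table-extension step. The `q` array below is a hypothetical stand-in, not the real SSA table; only its final value (0.973 at age 120) matches the post.

```python
import numpy as np

# Hypothetical stand-in for the actuarial table: q[a] is the probability
# of dying at age a, for ages 0 through 120. Placeholder values only.
q = np.linspace(0.0005, 0.973, 121)

def death_prob(age):
    # Past the end of the table, repeat the final entry, as the post does.
    return q[min(age, len(q) - 1)]
```

So `death_prob(140)` returns the same 0.973 as `death_prob(120)`.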
AGI timelines
Metaculus doesn’t have a way to draw samples from its distributions, so I messily approximated the community forecast on this Metaculus question: I found the PDF at various points and adjusted a lognormal distribution (discarding years before 2023) until it looked roughly right. The median of the fitted distribution is 2034. This is what the distribution looks like:
For the conventional estimate, I just multiplied each sample’s offset from 2023 by three, making the timelines three times as long (so a median of 2056).
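The fit can be sketched as follows. The median offset of 11 years pins the median to 2034 (a lognormal's median is exp(mu)); the `sigma = 0.8` shape parameter is an assumption for illustration, not the eyeballed value.

```python
import numpy as np

rng = np.random.default_rng(0)

# A lognormal's median is exp(mu), so mu = ln(11) puts the median
# offset at 11 years, i.e. a median AGI year of 2023 + 11 = 2034.
offsets = rng.lognormal(mean=np.log(11), sigma=0.8, size=100_000)
agi_years = 2023 + offsets
print(np.median(agi_years))           # ~2034

# Conventional estimate: triple each offset, moving the median to ~2056.
conventional_years = 2023 + 3 * offsets
print(np.median(conventional_years))  # ~2056
```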
Probability of surviving AGI
This is a very rough estimate of my probability of surviving AGI, conditional on AGI being invented at any particular time. Here’s what the graph looks like. My estimate levels out at 98%, while the conventional one approaches 100%. I actually think the true probability approaches 100%, but 98% is close enough to my estimate.
For the conventional estimate, I simply have an exponentially decaying probability of not surviving which starts at 15% given AGI in 2023 and drops to 1% by 2060.
Probability that, conditioning on surviving AGI, we eventually solve aging
I put this at 80% without having thought about it much. I think the main scenarios where this doesn’t happen involve us deciding to replace the humans currently alive with something we consider more valuable (which is incompatible with keeping currently living humans around), or succumbing to some other existential risk after we survive AGI.
Time between surviving AGI and solving aging
I model this as an exponential distribution with a mean of 5 years. I mostly think of solving aging as requiring a certain amount of “intellectual labor” (lognormally distributed), with the amount of intellectual labor per unit of time increasing rapidly after the advent of AGI as its price decreases dramatically. I have very wide error bars on both how much intellectual labor needs to be invested and how much AGI will speed up intellectual output, so rather than adding more moving parts to the model, I just throw an exponential distribution at it. In the Jupyter notebook, it’s easy to replace this parameter with your own distribution.
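The sampling step itself is one line, which is what makes the distribution easy to swap; a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential with a mean of 5 years between AGI and aging being solved.
# Replace this line with any other distribution (e.g. rng.lognormal)
# if you disagree with the exponential shape.
years_to_cure = rng.exponential(scale=5.0, size=100_000)
print(years_to_cure.mean())  # ~5
```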
No assumptions about continuous personal identity
I think there are some pretty weird scenarios for how the future could go, including ones where:
biological humans live for so long that calling them the “same person” applies much less than it does for people alive today, who generally live less than 100 years and change relatively little over time
humans become digital, and it’s unclear whether the uploading process “makes a new person” or “duplicates an existing person”
the human experience changes so much over time that we enter an unrecognizable state, where minds might relive the same moment over and over, or exist for very short periods of time, or change so quickly that there’s barely any continuity over time.
I’m not trying to capture any of these questions in the model. The main question this model is trying to answer is “Can I expect to witness aging being solved?” (whatever “I” means).
Appendix A: This does not imply accelerating AI development
One might look at the roughly 50/50 chance at immortality given surviving AGI and think “Wow, I should really speed up AGI so I can make it in time!” But the action space looks more like:
Work on AI safety (transfers probability mass from “die from AGI” to “survive AGI”)
The amount of probability transferred is probably at least a few microdooms per person.
Live healthy and don’t do dangerous things (transfers probability mass from “die before AGI” to “survive until AGI”)
Intuitively, I’m guessing one can transfer around 1 percentage point of probability by doing this.
Do nothing (leaves probability distribution the same)
Preserve your brain if you die before AGI (kind of transfers probability mass from “die before AGI” to “survive until AGI”)
This is a weird edge case in the model, and it depends on various beliefs about preservation technology and whether being “revived” is possible
Delay AGI (transfers probability from “die from AGI” to “survive AGI” and from “survive until AGI” to “die before AGI”)
Accelerate AGI (transfers probability mass from “survive AGI” to “die from AGI” and from “die before AGI” to “survive until AGI”)
I think working on AI safety and living healthily are much better choices than accelerating AI. I’d guess this is true even for the vast majority of purely selfish people.
For altruistic people, working on AI safety clearly trumps any other action in this space, as it has huge positive externalities. This is true for people who only care about current human lives (as one microdoom ≈ 8,000 current human lives saved), and it’s especially true for people who place value on future lives as well (as one microdoom = one millionth of the value of the entire long-term future).
This is a very simplified view of what it means to accelerate or delay AGI, which ignores that there are different ways to shift AGI timelines that transfer probability mass differently. In this model I assume that as timelines get longer, our probability of surviving AGI increases monotonically, but there are various considerations that make this assumption shaky and not generally true for every possible way to shift timelines (such as overhangs, different actors being able to catch up to top labs, etc.)
Appendix B: Most life extension questions on Manifold are underpriced
I assume that at least 60% of people with the option to live for a very long time in a post-AGI world will decide to live for a very long time. There are a bunch of life extension questions on Manifold. They ask whether some specific people, aged around 30 in 2023, will live to an age of 100, 150, 200, or 1000.
I think the probabilities for all of these ages should be within a few percentage points of each other for any particular person, as it seems pretty unlikely that someone lives to 100 using AGI-enabled advances in medicine and then decides not to live to 1000 in that same world.
I think all of these questions are significantly underpriced. One way to estimate the probability of someone born around 1990 living to 1000 is to take the probability of them “having the option to live forever” (40% according to this model) and multiply it by the probability that they would say yes (around 60%, I think). This puts the probability of someone born in 1990 living for more than 1000 years at around 24%.
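The arithmetic, as a sanity check (both inputs are this post's estimates, not market prices):

```python
p_option = 0.40   # P(born ~1990 has the option to live "forever"), from the model
p_take_it = 0.60  # assumed share who would choose to live very long
print(round(p_option * p_take_it, 2))  # 0.24
```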