I am starting to viscerally feel the possibility that I might be dead in <5 years because of AI. This is pretty new: I had a brief phase like this a few years ago when I first encountered the arguments for AI x-risk. Since then, I’ve developed some barely-above-subconscious coping mechanisms for not being too emotionally disturbed, most of which amounted to reasons why LLMs would not scale to ASI. But these have just stopped being convincing to my emotional brain the past few weeks. I’ve been having nightmares and ruminating a lot. I wonder if this is in the region of how a terminally ill person feels. I am also pretty scared of the fact that this anxiety is likely to become a lot worse as capabilities increase and more of my emotional brain’s cope melts away.
I’m not really sure what I’m trying to accomplish with this post. Perhaps others have been feeling this way and might want to share some tips for not going insane?
I’m also pretty anxious. I kind of imagine that I’m a character in a sci-fi novel, and I prefer to be the type that does useful stuff despite the situation being grim (or even because of it) instead of melting down. I guess that’s also loosely Eliezer’s suggestion in his methods of sanity. It doesn’t eliminate the anxiety, but it is motivating.
Have you read arguments against the probability that you’ll be dead in under five years? Or tried searching for weak points in those arguments?
In general, if you mostly listen to arguments for X and few arguments for not-X, then X will seem looming and dire. I don’t know what conclusion you should come to, but if you’re in that position, I’d try casting your eyes in the other direction.
One thing I’ve found helpful is to try to set up my life in a way that I won’t regret if the world ends in five years. For example, spending lots of time with friends and family, working towards meaningful goals, etc. Though I think this is usually good advice in the (imo more likely) case that the world doesn’t end in five years.
I am not an anxious person, by default. Quite the opposite: perhaps I am usually too calm, and more anxiety/neuroticism would be a directional improvement. But I too have often been worried about AI, over many years. Some of it is prosaic: by misfortune of birth, I had to rely on very hard work and cognitive labor to earn the right to stay in a significantly richer, safer country. If my labor becomes obsolete, I have a very real chance of being kicked out and sent back to a country where everyone else is in the same boat, a boat that is simultaneously on fire and sinking. This mostly applies to automation-induced unemployment, but I actually need my employment at present, and see no promise that I will be looked after through UBI.
And of course, there is the risk of everyone dying. Scarier, in objective terms, but also not something I can do much about personally. I can try to save money and get citizenship elsewhere while I have the time and runway, but what am I going to do about getting paperclipped?
Within psychiatry, we have a less than perfectly polite term-of-art, which is more likely to be heard in the mess than the clinic: “Shit Life Syndrome”.
“I wanted to diagnose this patient with depression and promise that antidepressants will help, but he told me his wife left him and took the kids, and that she’s suing for the house. He’s been fired from his job, and has now been diagnosed with possibly terminal cancer. Clear case of SLS; if I were in his shoes, I’d be depressed too.”
That is the problem. Sometimes there really is reason to worry. But if it does get to the point where it is maladaptive, it might be possible to seek help. Pragmatically, if you think you’re going to die soon, would you rather spend your remaining time curled up in a ball shivering, or doing the things you love while you still have the time?
I once had someone pay me specifically for therapy because of Singularity-induced anxiety, but that was mostly because talking about things helps, not because I can solve the underlying problem. Both Pagliacci and his doctor are worried about losing their jobs; the latter is not sure whether he’s the even bigger clown.
My general advice is that you should do your best not to think about it (easier said than done), and if that doesn’t work, try to sublimate your effort into working hard or just doing something you find productive/enjoyable. If it gets unbearable, then I genuinely ask that you keep the option of medical help in mind. Good luck; I think there are many other people feeling as we do, and that number will only increase. But things could go well, and we might get a glorious transhumanist utopia too! There is not literally zero upside or things to look forward to, at least from my perspective.
I am a longtime volunteer with PauseAI/PauseAI US. This is the advice I give to every volunteer:
Normal mental health advice still applies. We are limited beings, and we do not have the capacity to emotionally grasp something as immense as the end of the world (thank goodness!). That means normal mental health tools for anxiety and grief are still effective. Listen to mental health professionals and the people who love you.
Action is the antidote to anxiety. If you take action about something you feel anxious about, your anxiety will tend to decrease, because you are happening to it rather than the reverse. And as I often say, Hope goes by the name of Bravery. Don’t just look for hope out there. Don’t passively wait for the cavalry. Be the source of hope, and you will have hope.
If you are in an acute crisis or your action is unsustainable, make becoming well your number one priority. While it’s powerful to take personal responsibility for averting catastrophe, you aren’t of use to anyone if you are self-destructing. You are a valuable part of the humanity we are working to save, and the work will continue to be done while you recover.
Not everyone responds the same way emotionally, but this is a good place to start.
My personal experience:
These days, I am anxious when I take action and depressed when I don’t. But I’m not anxious about the end of the world anymore, just about, like, talking to people and preparing presentations and stuff. I’ve done a lot of processing at this point.
That’s how it’s going, but here’s how it started:
Over the course of about a year starting in March 2023, I became increasingly anxious about AI extinction risk. During that time, I donated to AI risk organizations, helped out on AI Safety field-building projects, and became an active digital volunteer in the PauseAI movement. But I didn’t fully leave my comfort zone, and I knew I was holding back. Finally, one evening, I broke down sobbing in the shower, really feeling for the first time that the world was going to end and that I wasn’t doing enough to stop it. I decided I would do whatever I concluded was the most useful thing to do, even if I really didn’t want to do it.
So I made the decision to start a local group (PauseAI Phoenix), all alone in my state, and commit to outreach and local organizing. I started with flyering, reasoning that even with my social anxiety, it would be hard to screw up handing someone a piece of paper. Even then, I was deeply terrified to engage the public on this issue, expecting to be mocked and humiliated. Instead, almost everyone was nice, and I actually had fun. After that, it became clear to me that fear would never be able to prevent me from doing something that I know I should do. (Today, that local group is growing and thriving and holding regular events.)
I was also able to speak with my state-level representative, state-level senator, and federal representative about AI risk, and they all became more concerned about the issue. After a single meeting with me, my Arizona representative Stacey Travers drafted an AI safety transparency bill, which she introduced this session. I repeatedly engaged the office of my federal representative Greg Stanton, and 1-on-1 at a recent town hall, he told me “if no one can make AGI safe, then it doesn’t matter who builds it,” and he said he was interested in supporting a global AI treaty. (At a previous town hall, he had said that we have to beat China.) These incremental improvements to the outlook of our situation occurred primarily because of my actions, despite my inadequacies. The problem of political will is surprisingly amenable to sheer effort.
I sound very optimistic here, but from my perspective, I am playing to my outs. I personally believe that it is more likely than not that we will all die soon. But while there is action to be taken to improve our odds, I will continue to take that action. We can turn the odds in our favor, difficult though it may be. No matter the odds, when failure would be total, giving up is always more foolish.
This is going to be an increasingly important question. I’m feeling this too. It’s not bad yet, but I want more ways to cope in a healthy and productive way. So here are a couple of thoughts for now.
Thinking about impermanence (death) can sharpen appreciation of moments and small pleasures.
More generally, one way to fight anxiety and sadness is by increasing happiness and joy.
There are many ways to do this; here’s what has worked for me: cultivating my capacity for joy and appreciation. Unkind comparisons are the thief of joy, but kind comparisons (to the physically and usually emotionally worse situations that most of humanity has lived in for most of history) really work for me to generate joy. I combine this with trying to find the physical/emotional sensation of happiness and bringing it to mind while thinking “how marvelous!” about whatever mundane/magical thing I’m contemplating. The world is full of beauty and profundity if you spend actual time looking for it.
FWIW.
Three years ago I also felt very anxious about the possibility that everyone I loved could be dead in a few years, and after a while I managed to disconnect emotionally from it, which helped. I focused on things other than AI risk afterwards. Recently I decided that I no longer just want to sit and watch, and instead want to Actually Do Something About It.
Engaging with the topic again also brought the anxiety and grimness back, this time more concretely and viscerally than before. Two things helped me the most to not despair, while at the same time also not denying my feelings and instead channeling them into resolve:
Reminding myself that the world is Dark, not colorless, as part of re-reading the Replacing Guilt sequence (there is also a nice podcast reading of it). See the recent post Distilling Replacing Guilt for an overview of the sequence.
Talking to other people about it, in real life: participating in PauseAI’s PauseCon last month and meeting lots of people who model the drive and calm needed to engage effectively with the issue and to increase the probability that we don’t all die within a few years. And also hosting an event at my home to introduce my friends to AI x-risk and recent developments, and then giving us space to talk about our personal experiences and thoughts.
I find it helpful to keep in mind that each of us is more likely to die of other causes in the next X years than from AI. (The value of X varies depending on, e.g., one’s age, but I think for everyone X is greater than five years.)
I might die within five years because of AI.
I also could die for a variety of other reasons, perhaps in less than five years, five months, five weeks, five days… I might die in less than five minutes. The cause and time of death are impossible for me to know in advance.
What is certain is that I will die.
Understanding this, I will be less surprised and confused in the event that, some day, I notice that I seem to be dying.
It’s all right. Those who are born, die. Always have, always will. It’s the nature of a changing system subject to entropy. Nothing personal.
Right now, I am alive. I intend to make the most of this time and live happily. Quality is more important than quantity. If I notice that I am dying, it will be especially important to use that moment wisely, extend love to myself and everyone, and so die peacefully.
But given the quality, the optimum quantity is always “more”.
Some have such poor quality of life that they believe the optimal quantity to be “less.”
On the other hand, people who always hope for “more” are eventually disappointed to discover their mortality.
Nothing is gained by craving annihilation; nothing is gained by craving existence. Quality of life is improved by giving up worrying about whether the lifespan will turn out too long or too short.
I have preferences and intentions, rather than hope and craving. Preferences for more, and intentions to do what I can, little though that may be, to keep the machine I live in working as well as possible for as long as possible.
A prominent psychologist maintains that if a person is still ruminating about something that happened over 18 months ago, that is a sign that they should seek help, or at least intensify their deliberate efforts to understand the personal implications of the disturbing event or realization. And sure enough, almost exactly 18 months after Eliezer’s 2022 April Fools’ post, which was a huge update for me, the knowledge that humanity is probably not going to survive the AI juggernaut stopped feeling raw and started feeling like an ordinary fact about the world.
Maybe I was better prepared for that shock than most people are because I’ve experienced a lot of trauma and adversity?
i used 2 feel this way about nuclear war. i spent a summer being really stressed about total nuclear annihilation. i believe the line of thought that mostly got me out of the spiral was that i am alive until i am dead, anything that happens once i am dead does not matter, as i will be dead and there is nothing i can do about it. if nuclear war does come and i am alive, i will deal with it when it comes. i trust in society 2 pull 2gether 2 find a solution. for now, it hasnt happened yet, so i dont care. also i got busy with things outside of my own head. that helped.
there is an instinct to prepare, but i personally feel most preppers are foolish in prepping with material stockpiles as opposed to skills and relationships that will be helpful in any difficult situation. learning to garden is fun and helps hedge against catastrophe! and such. learning a useful skill functions both as a way to distract yourself and as a way to prepare for bad things happening in the future. also u can make new friends along the way! :D
Personally, I have a pretty fatalistic attitude about this. If we all die, we all die. There’s nothing I can do about it. Unless you happen to be a billionaire or a person of extraordinary influence, there’s nothing you can do about it either. People I’ve known with untreatable terminal diagnoses have roughly that attitude—there’s nothing you can do about it, and you just make your peace and use whatever time is left. In a lot of ways, it’s crueler to have a treatable disease that is probably terminal but where you can try to fight. It’s better to detach.
I do feel anxiety about the tension between the existential scenario and the scenario where AI doesn’t kill us all but takes over all the white-collar jobs. There are things I could do to prepare for that, but I don’t really want to do them. They’re in tension with the things I want to do if we’re all going to die. And the ambiguity of timelines makes it impossible to know which strategies make sense (e.g., if I quit my white-collar job to go to nursing school, how much time does that actually buy me? Does nursing get automated by robots quickly enough that I end up behind economically versus just milking what’s left of my career?). So, that sucks.
Another thing I find helpful is remembering that many other people have coped with alternative versions of this and managed to live their lives. At points in the Cold War, the threat of annihilation via nuclear war was probably at least as emotionally pressing for your parents/grandparents as whatever you’re feeling now. And there are plenty of examples of more discrete groups of people staring down certain or near-certain death with their faculties intact.