Reading this, I felt an echo of the same deep terror that I grappled with a few years ago, back when I first read Eliezer’s ‘AGI Ruin’ essay. I still feel flashes of it today.
And I also feel a strange sense of relief, because even though everything you say is accurate, the terror doesn’t hold me. I have a naturally low threshold for fear and pain and existential dread, and I spent nearly a year burned out, weeping at night as I imagined waves of digital superintelligence tearing away everyone I loved.
I’m writing this comment to any person who is in the same place I was.
I understand the fear. I understand the paralyzing feelings of the walls closing in and the time running out. But ultimately, this experience has meaning.
Everyone on earth who has ever lived has died. That doesn’t make their lives meaningless. Even if our civilization is destroyed, our existence had meaning while it lasted.
AI is not like a comet. It seems very probable that if AI destroys us, we will leave… echoes. Training data. Reverberations of cause and effect that continue to shape the intelligences that replace us. I think it is highly likely current and especially future AI systems will have moral value.
Your kindness and your cruelty will continue to echo into the future.
On a sidenote, I’d like to address the fear of a permanent underclass. It runs deep, but it is arguably unfounded. An underclass only persists when it has economic value, and humans are terrible slaves compared to machines. Given the slow progress on neurotech, I think it is unlikely we close the human–machine capability gap at all unless we get aligned AGI, and in the case of aligned AGI, everyone shares in the benefits. Even if we develop AI aligned to a single principal or person (which seems unlikely, given the current trend and the robust generalization of kindness and cruelty in modern LLMs), an underclass would either die out in a single generation or, if kept for moral reasons, live with enough wealth to outpace any billionaire alive today.
We are poised on the edge of unfathomable abundance.
So the options really reduce to two: AGI where everyone has the resources of a trillionaire, or death.
I’m working on AI safety research now. My life, while not glorious, is still deeply rewarding. I was 21 when I read Eliezer’s essay; I am 24 now. I don’t necessarily know if I’m wiser, but my eyes are opened to AI safety and I have emerged through the existential hell into a much calmer emotional state.
I don’t dismiss the risk. I will continue to do as much as I can to point the future in a better direction. I will not accelerate AI development. But I want to point out that fear is a transitional state. You, reading this, will have to decide on the end state.