I went into medicine because medicine is applied transhumanism.
Most of my colleagues would object strenuously to this characterization, and I think they are wrong. They spend their days fighting disease, clawing back life-years from the void, chemically and surgically overriding the factory defaults of the human body—and yet the word “transhumanism” would make many of them recoil as though I had said something vaguely embarrassing at a dinner party. There is a certain type of person who will happily do the thing while refusing, on aesthetic grounds, to endorse the philosophy behind the thing. I do not begrudge them this, and they outnumber me.
(Not everyone is a high decoupler, or an adherent of radical internal consistency and coherence.)
I was always one of the people who endorsed the philosophy behind the thing. I grew up dreaming about genetic engineering, cybernetic augmentation, and the eventual abolition of aging. I had the full card-carrying package. The only reason I was not particularly anxious about AI timelines is that I did not have AI timelines—or rather, my implicit AI timelines were “late 21st century, someone else’s problem,” which is not really a timeline so much as a polite way of not thinking about it. I did not have an urgent need to think about it, even though I started thinking about all of this well before it was hip.
Then, somewhat ahead of schedule, the timeline arrived. I think 2022 was when I went from abstract concern to “oh fuck, we’re really trying to make AGI, aren’t we?”
Here is where I am supposed to be distressed. And I am, a little, but perhaps not about the things you would expect. The dreams of genetic engineering and cybernetic augmentation were always instrumental—means toward the end of not dying, of having more cognitive capacity than a three-pound organ optimized for Pleistocene conditions can provide, of becoming, in some meaningful sense, more. If AGI and then ASI arrive and are aligned (I am aware this is a large “if,” possibly the largest “if” in the history of human sentences), they get me to the same destination considerably faster than the biological route would have. I find I am not especially mourning the scenic path.
I am not wedded to the idea of being human, or becoming just a little bit better than human, or just a lot better than human. I want to become the kind of entity that takes up most of a Matrioshka Brain. You can’t make one out of meat.
So I am willing to shed the flesh as soon as shedding it becomes feasible, which puts me in a minority even among people who would self-identify as transhumanists. Most people, it turns out, want to be enhanced humans rather than post-humans. They want to keep the architecture and upgrade the components. I understand the appeal. I just do not share it strongly enough to treat it as a constraint. The part of me I care most about preserving and enhancing is computational; it does not care about biology as a privileged substrate.
What does cause me distress is the perceived risk of our current path killing me, and maybe everyone else. If you want a p(doom), it hovers around 20% these days, down from a peak of 30%. Not great, not terrible.
We could all die. Failure and death are always an option. I think about it with the particular emotional register of someone who has accepted a thing without having made peace with it. You can accept the actuarial tables without being happy about them.
I can’t do much about it, but I refuse to learn more helplessness than is strictly necessary. The thing about feeling like an actor in a history you cannot change: it does not actually follow that you should stop acting. Nothing I say or do will determine the outcome of the next decade in any individually legible way. This is also true of voting, and of keeping in shape, and of most of the things humans do that we nonetheless consider worthwhile. Super-rationality can be distinct from individual rationality. I will try anyway.
We might have become immortal and built Dyson Swarms anyway, with only minimally augmented human brains at our disposal. We are a capable species. It might just have taken much longer. Oh well: as long as AGI and ASI are aligned, I’m happy. I just note that it is a very big “if.”
I think when transhumanists say they want to be enhanced humans rather than post-humans, they mean it as a statement about current immediate desires, not about long-term destiny. I think I want, eventually, to become a galaxy-brain, but I want this process to be, to a degree, gradual, because I think there are different things to enjoy at different steps of that process, and also because making me a galaxy-brain right now would be equivalent to just killing me and replacing me with some completely different entity.
In other words, I want to enjoy the ride, not only the final destination, although I do care about the final destination.
I’d like to know more about this viewpoint—is there any literature you recommend?
I think the most recognizable intro is the Fun Theory sequence by Yudkowsky, which is featured right now on the LW website, by the way.
Thanks for your help! I had looked at this before, but for some reason missed most of the writings.