The motte and bailey of transhumanism

Most people on LW, and even most people in the US, are in favor of disease eradication, radical life extension, and the reduction of pain and suffering. A significant proportion (although likely a minority) are in favor of embryo selection or gene editing to increase intelligence and other desirable traits. I am also in favor of all these things. However, endorsing this form of generally popular transhumanism does not imply that one should endorse humanity’s succession by non-biological entities. Human “uploads” are much riskier than any of the aforementioned interventions: how do we know if we’ve gotten the upload right, and how do we make the environment good enough without having to simulate all of physics? Successors that are not based on human emulation are even worse. Deep-learning-based AIs are detached from the lineage of humanity in a clear way and are unlikely to resemble us internally at all. If you want your descendants to exist (or to continue existing yourself), deep-learning-based AI is no equivalent.
Succession by non-biological entities is not a natural extension of “regular” transhumanism. It carries altogether new risks and in my opinion would almost certainly go wrong by most current people’s preferences.
The term “posthumanism” is usually used to describe “succession by non-biological entities”, for precisely the reason that it’s a distinct concept, and a distinct philosophy, from “mere” transhumanism.
(For instance, I endorse transhumanism, but am not at all enthusiastic about posthumanism. I don’t really have any interest in being “succeeded” by anything.)

That makes sense; I just often see these ideas conflated in popular discourse.
I find this position on ems bizarre. If the upload acts like a human brain, and the uploads also seem normalish after I’ve interacted with them a bunch, I feel totally fine with them.
I’m also more optimistic than you about creating AIs that have very different internals but that would still be good successors, though I don’t have a strong opinion here.
I am not philosophically opposed to ems; I just think they will be very hard to get right (mainly because of the environment part: the em will be interacting with a cheap, downgraded version of the real world). I am willing to change my mind on this. I also don’t think we should avoid building ems, but I think it’s highly unlikely that an em life will ever be as good as or equivalent to a regular human life, so I’d not want my lineage replaced with ems.
In contrast to my point on ems, I do think we should avoid building AIs whose main purpose is to be equivalent to (or exceed) humans in “moral value”, and avoid pursuing anything that resembles building “AI successors”. IMO the main purpose of AI alignment should be to ensure AIs help us thrive and achieve our goals, rather than to attempt to embed our “values” into AIs with the goal of promoting those “values” independently of our existence. (“Values” is in scare quotes because I don’t think there is such a thing as human values; individuals differ a lot in their values, goals, and preferences.)
Would you be convinced if you talked to the ems a bunch and they reported normal, happy, fun lives? (Assuming nothing nefarious happened in terms of e.g. modifying their brains to report that.) I think I would find that very convincing. If you wouldn’t find that convincing, what would you be worried was missing?
I would find that reasonably convincing, yes (especially because my prior is already that true ems would not have a tendency to report their experiences in a different way from us).
i want drastically upgraded biology, potentially with huge parts of the chemical stack swapped out in ways I can only abstractly characterize now without knowing what the search over viable designs will output. but in place, without switching to another substrate. it’s not transhumanism, to my mind, unless it’s to an already living person. gene editing isn’t transhumanism, it’s some other thing; but shoes are transhumanism for the same reason replacing all my cell walls with engineered super-bio nanotech that works near absolute zero is transhumanism. only the faintest of clues what space an ASI would even be looking in to figure out how to do that, but it’s the goal in my mind for ultra-low-thermal-cost life. uploads are a silly idea, anyway, computers are just not better at biology than biology. anything you’d do with a computer, once you’re advanced enough to know how, you’d rather do by improving biology
computers are just not better at biology than biology. anything you’d do with a computer, once you’re advanced enough to know how, you’d rather do by improving biology
I share a similar intuition but I haven’t thought about this enough and would be interested in pushback!
it’s not transhumanism, to my mind, unless it’s to an already living person. gene editing isn’t transhumanism
You can do gene editing on adults (example). Also in some sense an embryo is a living person.
IMO the whole “upload” thing changes drastically depending on our understanding of consciousness and continuity of the self (which is currently nearly non-existent). It’s like teleportation: I would let neither that nor uploading happen to me willingly unless someone was able to convincingly explain to me precisely how my qualia are associated with my brain and how they’re going to move over (rather than the process just killing me and creating a different entity).
I don’t believe it’s impossible for an upload to be “me”. But I doubt it’d be as easy as simply making a scan of my synapses and calling it a day. If it is, and if that “me” is then also infinitely copiable, I’d be very ambivalent about it (given all the possible ways it could go horribly wrong—see this story or the recent animated show Pantheon for ideas).
So it’s definitely an “ok, but” position for me. I would probably feel more comfortable with a “replace my brain bit by bit with artificial functional equivalents” scenario, as one that preserves genuine continuity of self.
I think a big reason why uploads may be much worse than regular life is not that the brain scan won’t be good enough, but that the uploads won’t be able to interact with the real world the way you can as a physical human.
Edit: I guess with sufficiently good robotics the ems would be able to interact with the same physical world as us in which case I would be much less worried.
I’d say even simply a simulated physical environment could be good enough to be indistinguishable. As Morpheus put it:
What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.
Of course, that would require insane amounts of compute, but so would a brain upload in the first place anyway.
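For a rough sense of that scale, here is a back-of-envelope sketch. All the figures (roughly 1e14 synapses, an average firing rate around 1 Hz, ~10 FLOP per synaptic event) are commonly cited ballpark assumptions rather than measurements, and finer-grained emulation levels would cost far more:

```python
# Back-of-envelope estimate of compute for real-time whole-brain
# emulation at the spiking-network level of abstraction.
# All constants are rough ballpark assumptions, not measurements.

SYNAPSES = 1e14               # human brain has ~1e14-1e15 synapses
AVG_FIRING_RATE_HZ = 1.0      # average firing rate, roughly 0.1-2 Hz
FLOP_PER_SYNAPTIC_EVENT = 10  # assumed cost to process one synaptic event

flop_per_second = SYNAPSES * AVG_FIRING_RATE_HZ * FLOP_PER_SYNAPTIC_EVENT
print(f"~{flop_per_second:.0e} FLOP/s for one em")  # ~1e15 FLOP/s

# A convincing simulated environment (plus more detailed neuron models)
# would multiply this by several orders of magnitude, which is where
# the "insane amounts of compute" intuition comes from.
```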
I feel like this position is… flimsy? Insubstantial? It’s not that I disagree; I just don’t understand why you would want to articulate it in this way.
On the one hand, I don’t think the biological/non-biological distinction is very meaningful from a transhumanist perspective. Is an embryo genetically modified to have +9000 IQ going to be meaningfully considered “transhuman” rather than “posthuman”? Are you going to still be you after one billion years of life extension? “Keeping the relevant features of you/humanity after enormous biological changes” seems qualitatively the same as “keeping the relevant features of you/humanity after mind uploading”; i.e., if you know at the gears level which features of biological brains are essential to keep, you have a rough understanding of what you should work on in uploading.
On the other hand, I totally agree that if you don’t feel adventurous and don’t want to save the world at the price of your personality’s death, it would be a bad idea to undergo uploading with anything close to present technology. It just means that you need to wait for more technological progress. If we are in the ballpark of radical life extension, I don’t see any reason not to wait 50 years to perfect upload tech, and I don’t see any reason why 50 years would not be enough, conditional on at least normally expected technical progress.
The same goes for AIs. If we can have children who are meaningfully different from us, and who can become even more different in a glorious transhumanist future, I don’t see reasons not to have AI children, conditional on their designs preserving all the important, relevant features we want to see in our children. The problem is that we are not on track to create such designs, not that such designs can’t conceptually exist.
And all of this seems to be straightforwardly deducible from, or anticipated by, the concept of transhumanism, i.e., the concept that the good future is one filled with beings capable of meaningfully saying that they were Homo sapiens and stopped being Homo sapiens at some point in their life. When you say “I want radical life extension”, you immediately run into the question “wait, am I going to be me after one billion years of life extension?”, and you start down The Way through all the questions about self-identity, the essence of humanity, succession, et cetera.
I am going to post about biouploading soon, where the uploading happens into (or via) a distributed net of my own biological neurons. This combines the good things about uploading (immortality, the ability to be copied, ease of repair) with the good things about being a biological human (preserving infinite complexity, the exact sameness of the person, and a guarantee that the bioupload will have human qualia and any other important hidden things which we might otherwise miss).
Like with AGI, risks are a reason to be careful, but not a reason to give up indefinitely on doing it right. I think superintelligence is very likely to precede uploading (unfortunately), and so if humanity is allowed to survive, the risks of making technical mistakes with uploading won’t really be an issue.
I don’t see how this has anything to do with “succession”, though; there is a world of difference between developing options and forcing them on people who don’t agree to take them.