The basic argument for the feasibility of transhumanism

Eliezer sometimes talks about how animals on earth are but a tiny dot in the “mind design space.” For example, in “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” he writes:

The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes. Natural selection creates complex functional machinery without mindfulness; evolution lies inside the space of optimization processes but outside the circle of minds.

Though Eliezer doesn’t stress this point, this argument applies as much to biotechnology as to Artificial Intelligence. You could say, paralleling Eliezer, that when we talk about “biotechnology” we are really talking about living things in general, because life on Earth represents just a tiny subset of all life that could have evolved anywhere in the universe. Biotechnology may allow us to create some of that life that could have evolved but didn’t. Extending the point, there’s probably an even vaster space of life that’s recognizably life but couldn’t have evolved, because it occupies a tiny island in the space of possible life, not connected to other viable organisms by a chain of small, beneficial mutations, and is therefore effectively impossible to reach without the conscious planning of a bioengineer.

The argument can be extended further to nanotechnology. Nanotechnology is like life in that both involve doing interesting things with complex arrangements of matter on a very small scale; it’s just that visions of nanotechnology tend to involve things that don’t otherwise look very much like life at all. So we’ve got this huge space of “doing interesting things with complex arrangements of matter on a very small scale,” of which existing life on Earth is a tiny, tiny fraction, and in which “Artificial Intelligence,” “biotechnology,” and so on represent much larger subsets.

Generalized in this way, this argument seems to me to be an extremely important one, enough to make it a serious contender for the title “the basic argument for the feasibility* of transhumanism.” It suggests a vast space of unexplored possibilities, some of which would involve life on Earth being very different than it is right now. Short of some catastrophe putting a halt to scientific progress, it seems hard to imagine how we could avoid some significant changes of this sort taking place, even without considering specifics involving superhuman AI, mind uploading, and so on.

On Star Trek, this outcome is avoided because a war with genetically enhanced supermen led to the banning of genetic enhancement, but in the real world such regulation is unlikely to be totally effective, any more than current bans on recreational drugs, performance enhancers, or copyright infringement are totally effective. Of course, the real reason for the genetic engineering ban on Star Trek is that stories about people fundamentally like us are easier for writers to write and for viewers to relate to.

I could ramble on about this for some time, but my reason for writing this post is to bounce ideas off people. In particular:

  1. Is there a better candidate for the title “the basic argument for the feasibility of transhumanism”?

  2. What objections can be raised against this argument? I’m looking both for good objections and objections that many people are likely to raise, even if they aren’t really any good.

*I don’t call it an argument for transhumanism, because transhumanism is often defined to involve claims about the desirability of certain developments, which this argument says nothing about one way or the other.