My Transhuman Dream

Ever since watching Ghost in the Shell (1995), I have had the dream of self-modifying into a higher being.

The scene where Kusanagi (story spoiler) merges with Project 2501 is the most interesting scene I have ever seen. Not because of the scene itself, but because it provides a springboard for thinking really interesting thoughts.

What if I were Kusanagi? What happens in the merger? What would the experience of merging be like? What would it be like afterward? Would Kusanagi still be the same person?

It did not take long before I was considering what could be achieved with controlled self-modification. What if you could self-modify however you wanted? I was not even thinking in the obvious direction of immortality. I was mainly thinking about how the algorithms in my brain could be changed. How could they be improved? What are the limits of optimality in this domain? Could you never be tired, and always be motivated to do the best/right thing? Could you be a system that is always “well”, without suffering, while still being able to perform effective optimization?

Around the same time, I came across the Suit Switch Scene from SOMA’s excellent story. That triggered some very interesting thoughts and important conceptual advances about what it would mean to upload yourself, and whether you would still be you afterward. I would no longer react the way Simon does in the ending (I am not sure I ever would have). I understand that if I uploaded myself, Johannes would then exist twice, and “I” would not be the upload. Maybe you could replace all your neurons one step at a time with better artificial variants, or continuously transform yourself in some other way. But it is unclear whether this would actually make a difference; we don’t understand consciousness, after all.

After that, something terrible happened: I sort of gave up on this dream. Not explicitly; I never consciously decided to give up. Rather, I started to think that it would be impossible for me to ever self-modify, and eventually I stopped thinking about it almost completely.

All that was in 2017, five years ago, sometime before I got interested in AI alignment.

Self-modification was something I wanted for myself. In a sense, that was a very concrete thing, easy to relate to. It was all very self-centered. I did not even think about what this would imply for other people.

Compare this to the way I was thinking about why AI alignment is important. After reading The Moral Landscape by Sam Harris, I decided that my objective would be to maximize positive experiences and minimize negative experiences of conscious beings. And after reading Superintelligence for the second time, it seemed obvious that getting AI right is the single most important thing for optimizing toward that objective.

A few weeks ago I reconsidered transhumanism. The connection to AGI is so obvious that I am surprised I did not notice it before: once you have an aligned AGI, becoming transhuman would be trivial. I might have missed it because I was so focused on the other objective. I still think that optimizing for positive experiences is the best goal to follow. It is underspecified, of course, but it points in roughly the right direction, more so than anything else I have ever come up with or come across.

But sometimes that goal just seems very abstract, which makes it harder to get energized by it. With transhumanism, it seems easier. The most likely path to transhumanism is through aligned AGI. So I have decided to make becoming transhuman my primary goal. Not because I think it is the most important thing, but because it seems to energize me more.

It might be that this is only a temporary effect and that transhumanism will soon energize me less, once the novelty wears off. But I think it is worth a try.

The great thing is that making AI go well is the best thing to do both for optimizing conscious experiences and for becoming transhuman. In fact, making AI go well seems to be the most important thing almost irrespective of what your goal is. And if I were to succeed in becoming transhuman, then besides doing all the amazing things this enables, I predict that I would once again make optimizing conscious experiences my primary objective.

I expect that problems of motivation are easily solved once you have full read and write access to yourself. It seems likely that you could achieve that through empirical experimentation alone, without too much risk, even without understanding the brain, at least if you are careful and back yourself up. But of course, you should be able to quickly understand the brain and develop a good theory of self-modification once you have an aligned AGI helping you.

Whether there would be anything useful left for me to do toward the goal of optimizing conscious experiences, once an aligned AGI is around, is another question.

Being able to self-modify is the most interesting situation I can imagine myself in. And for me, it is one of the most interesting things, if not the most interesting thing, to fantasize about. Although, ironically, once I could arbitrarily change what I find interesting, this might no longer be the case.

If reading this inclines you more toward transhumanism, it is worth remembering that you can think too much about utopia.

One of the reasons why I am excited about transhumanism is that, similar to getting AI right, it would help you with literally everything. Everything you would ever want to do can be done better by becoming transhuman. Even if we decide that humans will just have a “fun time” once we have an aligned AGI around, you could still have more of a “fun time” by becoming transhuman. And I am not only talking about scenarios some might call “hedonic pitfalls”, e.g. humanity just playing video games, but about everything you could ever do in this domain.

Prompts:

  • How would humans handle a situation where we have an aligned AGI? Human labor would likely become obsolete, even if you are an emulation or transhuman in some other way, unless we specifically optimize against that.

  • What ways might there be to become transhuman without uploading/​duplicating yourself? (See the SOMA links in the text.)

  • Are our notions of death still good concepts once you can back yourself up, duplicate yourself, and cease to exist without pain and without fear?

    • Is there a better-suited concept for that situation?

  • In what way is thinking about self-modification helpful even now? After all, many things, like learning, might be best thought of as self-modification.
