I recently read this essay and had a panic attack. I assume that this is not the mainstream of transhumanist thought, so if a rebuttal exists it would save me a lot of time and grief.
Oh, huh, umm. I certainly didn’t want to cause anyone panic attacks by writing that, though in retrospect I should have realized that it’s a bit of an information hazard.
I’m sorry.
If it’s any comfort, I feel that my arguments in that article are pretty plausible, but that predicting the future is such a difficult thing filled with unknown unknowns that the vast majority of “pretty plausible” predictions are going to be wrong.
That’s a bit of an oxymoron, but thanks for saying it. I’m calmer than I was this morning, and your argument seems less convincing now, too. I think the ‘singleton’ is the natural course of intelligent evolution, and fits the whole idea of AI.
What a weird thing!
The convincingness of an idea can depend very much on one’s mood. This is obvious in cases of clinical depression, but I think it is present in ordinary mental functioning as well. We tend to judge convincingness by narrative coherence rather than logic and evidence. The coherence is not just the coherence internal to the story, but its coherence with one’s own feelings and experiences. As the latter change, so does the convincingness of the story.
Hypothesis: Ideas that retain their convincingness in the long term do so not by being especially rigorously argued or supported by solid evidence, but by constituting a large enough, coherent enough story to crowd out influence from day to day experience. It is the experience that will be interpreted in the light of the story rather than the other way round.
I think religions fit the bill pretty nicely.
I think pop science does as well.
Ask most people what they imagine a better life and a better world might be, and they will rarely imagine anything more than the present evils removed. Less disease, less starvation, less drudgery, less killing, less oppression. Their positive vision is merely the opposite of these: more health, more food, more fun, more love, more freedom.
When cranked up to transhuman levels, this looks like no ignorance, instant access to all knowledge, no stupidity, unlimited intelligence, no disease, unlimited lifespan, no technological limits, unlimited technological superpower, less environmental cramping, expansion across the universe.
What will people do, when almost everything they currently do is driven by exactly those limits that it is the transhuman vision to eliminate? Think of everything you have done today—how much of it would a transhuman you in a transhuman world have done?
I got out of bed. Sleep? What need has a transhuman of sleep? I showered, unloaded the washing machine that had run overnight, ate breakfast. Surely these and a great deal more stand in the same relation to a transhuman life as the drudgery of a 13th century peasant does to my own. I am typing on a keyboard. A keyboard! How primitive! Later today I will have taiko practice. Practice? Surely we will download such skills, or build robots to do them for us? I value the physical exertion. Exertion? What need, when we are uploads using whatever physical apparatus we choose, which will always run flawlessly?
The vision usually looks like having machines to do our living for us, leaving us as mere epiphenomena of a world that runs itself. We might think that “we” are colonising the galaxy, while to any other species observing, we might just look like a madly expanding sphere of von Neumann machines, with no valuable personhood present. Such is the vision of Utopia that results from imagining the future as being the present, but better, extrapolated without bound.
The Fun Sequence (long version, short version) says a lot about what sort of thing makes for a genuine Utopia, but I don’t think it contains examples of a day in the life. Perhaps it cannot, any more than a 13th century peasant’s dreams could contain anything resembling the modern world. One attempt I saw, which I can’t now find, imagined (this is my interpretation of it, not the way it was presented) a future that amounted to better BDSM scenes. This strikes me as about as realistic as a million years of sex with catgirls.
What will people do, when almost everything they currently do is driven by exactly those limits that it is the transhuman vision to eliminate? Think of everything you have done today—how much of it would a transhuman you in a transhuman world have done?

What I would want to do, or what I think I would do? I certainly would want to hold on to my values, but I’m not yet sure which ones.
I don’t see how you can just crank these very specialized phenomena several orders of magnitude higher and still remain remotely human. That’s the point of the essay: we wind up as something we would view as monstrous today.
I’ve had existential crises thinking about such things. Stuff like living forever or having my brain upgraded beyond recognition scare me, for reasons I can’t quite put into words.
I’m comforted by the argument that it won’t happen overnight. We will probably gradually transition into such a world and it won’t feel so weird and shocking. And if we get it right, the AI will ask us what we want, and present us with arguments for and against our options, so we can decide what we actually want. Not just get stuck in a shitty future we wouldn’t want.
I don’t know that there is a rebuttal. Wireheading goes all the way back to Homer:

They started at once, and went about among the Lotus-eaters, who did them no hurt, but gave them to eat of the lotus, which was so delicious that those who ate of it left off caring about home, and did not even want to go back and say what had happened to them, but were for staying and munching lotus with the Lotus-eaters without thinking further of their return; nevertheless, though they wept bitterly I forced them back to the ships and made them fast under the benches. Then I told the rest to go on board at once, lest any of them should taste of the lotus and leave off wanting to get home.

--The Odyssey
The solution there seems to be not to do it in the first place. It has long been a theme of dystopian fiction that our technology will erode or destroy what it means to be human. Playing around with the brain is definitely going to change things, most likely in ways we can’t quite predict—not to mention any accidental damage caused by novel methods. The only rebuttal I can think of is that our current technology is too crude and barbarous to make such modifications worth the drawbacks.
I think there’s still some solace though. There’s always a reaction to technology that tries to become too invasive. Many are willing to use drugs to regulate their moods, but there’s also strong counter-pressure. New technology doesn’t spread overnight. We’ll have plenty of examples of brain-modified people before the methods become widespread. I’d like to think people will be able to decide whether it’s worth the cost before they jump headlong into it.
I don’t believe that it’s mainstream transhumanist thought, in part because most people who’d call themselves transhumanists have not been exposed to the relevant arguments.
Does that help? No?
The problem with this vision of the future is that it’s nearly basilisk-like in its horror. As you said, you had a panic attack; others will reject it out of pure denial that things can be this bad, or perform motivated cognition to find reasons why it won’t actually happen. What I’ve never seen is a good rebuttal.
If it’s any consolation, I don’t think the possibility really makes things that much worse. It constrains FAI design a little more, perhaps, but the no-FAI futures already looked pretty bleak. A good FAI will avoid this scenario right along with all the ones we haven’t thought of yet.
The writer did seem to think that it was very likely. But he dismisses the idea of FAI being a singleton.
A few rebuttals. A race to the bottom only works in a universe where there is a reason to keep getting lower. In a petri dish, if you don’t replicate fast enough you die; that’s a strong selective pressure. In the human world there is no equivalent pressure. (I also disagree with the rat-island example for this reason.) Where there is no benefit to getting lower, it won’t happen. Evolution seems to proceed by two main pressures: slow selection of the fittest, and sudden selection by major environmental events, with a range of both in between. For the sake of argument: the cutest humans procreate, and the lesser ones less so; but no one survives the next meteor strike, only the cockroaches, which then evolve under slow pressures until the next big pressure.
As for distinct minds: we (as humanity) would only go down the path of non-distinct minds if we wanted to. It may seem like a bad thing from our perspective now, but it’s a bit of a strawman to argue that it will certainly be something we do not want when the time comes that it is actually possible. I am not concerned about such far-away situations that get framed as problems.
Was there any particular point that you would like refuted?
It may seem like a bad thing from our perspective now, but it’s a bit of a strawman to argue that it will certainly be something we do not want when the time comes that it is actually possible.

This is absolutely what I am afraid of. Values themselves will be selected for, and I don’t want my values to be ground up entirely to dust. Who’s to say that I will want to exist under a different value system, even as a part of some larger consciousness? What if consciousness is a waste of resources?
Every day we wake up as a slightly different version of the consciousness that went to sleep. In this way, the entirety of our conscious existence is undergoing small changes. Each day we wouldn’t even think to ask ourselves if we are the same person as yesterday, but if we could isolate the me of today and talk to the me of 10 years ago, we would notice the difference clearly.

It is a fact of life that we take changes day by day. If that’s where we end up, I don’t think the you of today has anything to complain about, because the you of every day in between gradually made the choices to end up there.

The you of today should contend with the you of every single day between now and the state that you dislike (lack of consciousness or whatever) before being able to hold a complaint about it.
So? I don’t think you’re really getting my point here. If consciousness is fluid or imperfect, that doesn’t mean it is worthless.

Yes, I don’t think I was getting your point.
Also, I am not sure that you were getting my point. If, in the future, the choice to do away with consciousness is made, it will be made by future entities with much more information and clearer reasons for doing so. Without that future information and reasoning at our disposal, we can’t really criticize the decision. I can confidently say that my consciousness (based on what I know) does not want to be gotten rid of right now. If overpoweringly convincing reasons come along to change my mind, then I will make that decision at that time, with the best information available.

My point was that the decision-making process is up to the future self and depends on future information. The future self will not be making worse decisions. It will not make decisions that do not benefit itself (judged by a version of your current values that is only slightly different).

Does that make sense? Or should I try to explain it again?
You’re definitely missing the point of the whole thing. Suppose that the optimal design for gaining knowledge is something like this (a vast supercomputer without the slightest bit of awareness or emotion).
I think it is very unlikely. Even in the worst-case scenarios, I can’t imagine that a superintelligence wouldn’t inherit some sort of value.
I don’t see the problem with that being the eventual case. Death of the state of the world as we know it, yes; but also the existence of a new entity. That’s the way the cookie crumbles.
Are you expecting these things to happen within your lifetime?
Probably not within my own natural lifetime, no.
I find myself conflicted about this. I want to preserve my human condition, and I want to give it up. It’s familiar, but it’s trying. I want the best of both worlds: the ability to challenge myself against real hardships and succeed, but also the ability to avoid the greatest hardships that I can’t overcome on my own. The paradox is that solving the actual hardships, like aging and death, will require sufficient power to make the enjoyable hardships (solving puzzles, playing sports and other games, achieving orgasm, etc.) trivial.
I think that one viable approach is to essentially live vicariously through our offspring. I find it enjoyable watching children solve problems that are difficult for them but are now trivial for me, and I think that the desire to teach skills, and to appreciate the success of (for lack of a better word) less advanced people learning how to solve the same problems that I’ve solved, could provide a very long sequence of Fun in the universe. Pre-singularity humans already essentially do this. Grandparents still enjoy life despite having solved virtually all of the trivial problems (and facing imminent big problems), and I think I’d be fine being an eternal grandparent to new humans or other forms of life. I can’t extrapolate that beyond the singularity, but it makes sense that if we intend to preserve our current values, we will need someone to be in a situation where those values still matter; and if we can’t experience those situations ourselves, then the offspring we care about are a good substitute. The morality of creating children may itself be an issue.
Another solution is a walled garden run by an FAI that preserves the trivial problems humans like solving but solves the big problems itself. This has a stronger possibility of value drift, and I think people would value life a bit less if they knew it was ultimately a video game.
It’s also possible that upon reflection we’ll realize that our current values also let us care about hive-minds in the same way we care about our friends and family now. We would be different, alien to our present selves, but with the ability to trace our values back to our present state and see that at no point did we sacrifice them for expediency or abandon them for their triviality. This seems like the least probable solution, simply because our values are not special: they arose in our ancestral environment because they worked. That we enjoy them is an accident, and that they could fully encompass the post-singularity world seems a bit miraculous.
As a kid I always wondered about this in the context of religious heaven. What could a bunch of former humans possibly do for eternity that wouldn’t become terribly boring or involve complete loss of humanity? I could never answer that question, so perhaps it’s an {AI,god}-hard problem to coherently extrapolate human values.
What’s wrong with hive minds? As long as my ‘soul’ survives, I wouldn’t mind being part of some gigantic consciousness.
Also, another thought: it may take an AI to solve philosophy and the nature of the universe, but it may not be far beyond the capacity of the human brain to understand it.
I appreciate the long response.
What’s wrong with hive minds? As long as my ‘soul’ survives, I wouldn’t mind being part of some gigantic consciousness.

A hive mind can quickly lose a lot of old human values if the minds continue past the death of individual bodies. Additionally, values like privacy and self-reliance would be difficult to maintain. Also, things we take for granted, like being able to surprise friends with gifts or having interesting discussions while getting to know another person, would probably disappear. A hive mind might be great if it was formed from all your best friends, but joining a hive mind with all of humanity? Maybe after everyone is your best friend...
I don’t know if it’s the mainstream of transhumanist thought but it’s certainly a significant thread.
Information hazard warning: if your state of mind is again closer to “panic attack” and “grief” than to “calmer”, or if it’s not but you want to be very careful to keep it that way, then you don’t want to click this link.
I read it. Your warning did the opposite of what you intended, and the fact that you posted it at all is an incredible error of judgment. Did you even take ten seconds to think this through?
Anyway, the piece wasn’t very convincing and I’ve already considered almost everything that was in it. No real harm done. This time.
(Shame on you!)