I do not have answers to the question I raise here.
Historical anecdotes.
Back in the stone age (I think something like the 1960s or 1970s) I read an article about the possible future of computing. Computers back then cost millions and lived in giant air-conditioned rooms, and memory was measured in megabytes. Single figures of megabytes. Someone had expressed to its writer the then-visionary idea of using computers to automate a company. They foresaw that when, for example, a factory was running low on some of its raw materials, the computer would automatically know that, and would make out a list of what was needed. A secretary would type that up into an order to post to a supplier, and a secretary there would input that into their computer, which would send the goods out. The writer’s response was: “What do you need all those secretaries for?”
Back in the bronze age, when spam was a recent invention (the mid-90s), there was one example I saw that was a reductio ad absurdum of fraudulent business proposals. I wish I’d kept it, because it was so perfect of its type. It offered the mark a supposed business in which they would accept orders for goods, which the staff the spammer provided (imaginary, of course) would zealously process and send out on the mark’s behalf, and for which the mark would receive an income. The obvious question about this supposed business is: what does it need the sucker for? The real answer is: to pay the spammer money for this non-existent opportunity. If the business were as advertised, the person receiving the proposal would be superfluous to its operation, an unconnected gear spinning uselessly.
Dead while thinking
Many people’s ideas of a glorious future look very much like being an unconnected gear spinning uselessly. The vision is of everything desirable happening effortlessly and everything undesirable going away. Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless. Hack yourself to make everything you think you should be doing fun fun fun. Hack your brain to be happy.
If you’re a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role in the process? Who needed you?
Got a presentation to make? The AI will write a report, and summarise it, and generate PowerPoint slides, and the audience’s AIs will summarise it and give them an action plan, and what do you need any of those people for?
Why climb Kilimanjaro if a robot can carry you up? Why paint, if Midjourney will do it better than you ever will? Why write poetry or fiction, or music? Why even start on reading or listening, if the AI can produce an infinite stream, always different and always the same, perfectly to your taste?
When the AI does everything, what do you do? What would the glorious future actually look like, if you were granted the wish to have all the stuff you don’t want automatically handled, and the stuff you do want also?
The human denizens of the movie WALL-E are couch potatoes who can barely stand up, but that is only one particular imagining of the situation. When a magnificent body is just one more of the things that are yours for the asking, what will you do with it in paradise?
Some people even want to say “goodbye cruel world” and wirehead themselves.
Iain M. Banks imagined a glorious future in the form of the Culture, but he had to set his stories in the places where the Culture’s writ runs weakly. There are otherwise no stories.
These are akin to the goals of dead people. In that essay, the goals are various ways of ensmallening oneself: not having needs, not bothering anyone, not being a burden, not failing, and so on. In the visions above, the goals sound more positive, but they aren’t. They’re about having all needs fulfilled, not being bothered by anything, not having burdens, effortlessness in all things. These too are best accomplished by being dead. Yet these are the things that I see people wanting from the wish-fulfilling machine.
And that’s without misalignment, which is a whole other subject. On the evidence of what people actually wish for, even an aligned wish-fulfilling machine is unaligned. How do we avoid ending up dead-while-thinking? Asking an AI would be missing the point.
The vision is of everything desirable happening effortlessly and everything undesirable going away.
Citation needed. Particularly for that first part.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You’re thinking pretty small there, if you’re in a position to hack your body that way.
If you’re a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role in the process? Who needed you?
Why would I want to even be involved in creating software that somebody else wanted? Let them ask the computer themselves, if they need to ask. Why would I want to be in a world where I had to make or listen to a PowerPoint presentation of all things? Or a summary either?
Why do I care who needs me to do any of that?
Why climb Kilimanjaro if a robot can carry you up?
Because if the robot carries me, I haven’t climbed it. It’s not like the value comes from just being on the top.
Helicopters can fly that high right now, but people still walk to get there.
Why paint, if Midjourney will do it better than you ever will?
Because I like painting?
Does it bother you that almost anything you might want to do, and probably for most people anything at all that they might want to do, can already be done by some other human, beyond any realistic hope of equaling?
Do you feel dead because of that?
Why write poetry or fiction, or music?
For fun. Software, too.
Why even start on reading or listening, if the AI can produce an infinite stream, always different and always the same, perfectly to your taste?
Because I won’t experience any of that infinite stream if I don’t read it?
What would the glorious future actually look like, if you were granted the wish to have all the stuff you don’t want automatically handled, and the stuff you do want also?
The stuff I want includes doing something. Not because somebody else needs it. Not because it can’t be done better. Just because I feel like doing it. That includes putting in effort, and taking on things I might fail at.
Wanting to do things does not, however, imply that you don’t want to choose what you do and avoid things you don’t want to do.
If a person doesn’t have any internal wish to do anything, if they need somebody else’s motivations to substitute for their own… then the deadness is already within that person. It doesn’t matter whether some wish gets fulfilled or not. But I don’t think there are actually many people like that, if any at all.
They’re about having all needs fulfilled, not being bothered by anything, not having burdens, effortlessness in all things. These too are best accomplished by being dead. Yet these are the things that I see people wanting from the wish-fulfilling machine.
I think you’re seeing shadows of your own ideas there.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You’re thinking pretty small there, if you’re in a position to hack your body that way.
Yet these are actual ideas someone suggested in a recent comment. In fact, that was what inspired this rant, but it grew beyond what would be appropriate to dump on the individual.
I think you’re seeing shadows of your own ideas there.
Perhaps the voice I wrote that in was unclear, but I no more desire the things I wrote of than you do. Yet that is what I see people wishing for, time and again, right up to wanting actual wireheading.
Scott Alexander wrote a cautionary tale, “The Whispering Earring”, of a device that someone would wear in their ear, that would always tell them the best thing for them to do, and was always right. The first thing it tells them is “don’t listen to me”, but (spoiler) if they do listen, it doesn’t end well for them.
Because I won’t experience any of that infinite stream if I don’t read it?
There are authors I would like to read, if only they hadn’t written so much! Whole fandoms that I must pass by, activities I would like to be proficient at but will never start on, because the years are short and remain so, however far an active life is prolonged.
I think something like the Culture, with aligned superintelligent “ships” keeping humans as basically pets, wouldn’t be too bad. The ships would try to have thriving human societies, but that doesn’t mean granting all wishes—you don’t grant all wishes of your cat after all. Also it would be nice if there was an option to increase intelligence, conditioned on increasing alignment at the same time, so you’d be able to move up the spectrum from human to ship.
See also Stanislaw Lem on this subject:

“The freedom I speak of, it is not that modest state desired by certain people when others oppress them. For then man becomes for man—a set of bars, a wall, a snare, a pit. The freedom I have in mind lies farther out, extends beyond that societal zone of reciprocal throat-throttling, for that zone may be passed through safely, and then, in the search for new constraints—since people no longer impose these on each other—one finds them in the world and in oneself, and takes up arms against the world and against oneself, to contend with both and make both subject to one’s will. And when this too is done, a precipice of freedom opens up, for now the more one has the power to accomplish, the less one knows what ought to be accomplished.”
Upvoted as a good re-explanation of CEV complexity in simpler terms! (I believe LW will benefit from recalling long-understood things, so that it has a chance of predicting the future in greater detail.)

In essence, you prove the claim “Coherent Extrapolated Volition would not literally include everything desirable happening effortlessly and everything undesirable going away”. Would I be wrong to guess that it argues against the position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone?
That said, the current wishes of many people do include having the things they want done faster and more easily; it’s just that the more you extrapolate, the smaller the fraction who want that level of automation: the divergence grows as you consider larger scales.
I suppose it does. That article was not in my mind at the time, but, well, let’s just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. “Pleasure” is not among my goals, and the poster’s vision of a universe of hedonium is to me one type of dead universe.