Eliezer: You’ve got various people insisting that an arbitrary mind, including an expected paperclip maximizer, would do various nice things or obey various comforting conditions: “Keep humans around, because diversity is important to creativity, and the humans will provide a different point of view.” Now you might want to seriously ask if, even granting that premise, you’d be kept in a nice house with air conditioning; or kept in a tiny cell with life support tubes and regular electric shocks if you didn’t generate enough interesting ideas that day (and of course you wouldn’t be allowed to die);
this seems unlikely to me. Humans are only creative under certain conditions, and the conditions you have described seem conducive to turning potentially creative humans into useless, insane lumps of flesh.
or uploaded to a very small computer somewhere,
this is, in my opinion, a very likely outcome. Though one can argue about what “small” means.
and restarted every couple of years. No, let me guess, you’ll be more productive if you’re happy. So it’s clear why you want that to be the argument; but unlike you, the paperclip maximizer is not frantically searching for a reason not to torture you.
Sorry, the whole scenario is still around as unlikely as your carefully picking up ants on the sidewalk, rather than stepping on them, and keeping them in a happy ant colony for the sole express purpose of suggesting blog comments. There are reasons in my goal system to keep sentient beings alive, even if they aren’t “useful” at the moment. But from the perspective of a Bayesian superintelligence whose only terminal value is paperclips, it is not an optimal use of matter and energy toward the instrumental value of producing diverse and creative ideas for making paperclips, to keep around six billion highly similar human brains.
I don’t think that having six billion highly similar human brains is a good thing, so in this sense I am with the paperclip maximizer. Look at all the boring, generic, average lives that are lived today. Our confinement to human bodies is not a good thing as far as I am concerned. So I’m not taking the world as it is today, and arguing that Universal Instrumental Values will keep it exactly the way it is.
The reason I got interested in UIVs to start with is that I didn’t have a good way to decide what counted as a good outcome.