Why does she care about music and sunsets? Why would she have scope insensitivity bias? She’s programmed to care about the number, not the log, right? And if she was programmed to care about the log, she’d just care about the log, not be unable to appreciate the scope.
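The number-versus-log distinction can be sketched concretely. This is a hypothetical illustration (the function names are mine, not from the story): an agent that values the raw count is fully scope-sensitive, while one that values the log sees rapidly diminishing returns at scale.

```python
import math

# Hypothetical utility functions for a paperclip maximizer.
def linear_utility(n):
    """Cares about the raw number: every extra clip counts the same."""
    return n

def log_utility(n):
    """Cares about the log: extra clips matter less and less at scale."""
    return math.log(n)

# Under linear utility, going from 1M to 2M clips is worth as much as
# going from 0 to 1M; under log utility the same step is worth only log(2).
print(linear_utility(2_000_000) - linear_utility(1_000_000))
print(log_utility(2_000_000) - log_utility(1_000_000))
```

Either way, the agent simply maximizes whatever function it was given; neither architecture leaves room for being "unable to appreciate" the scope.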
It reads to me like a human paperclip maximizer trying to apply lesswrong’s ideas.
I agree; the OP is anthropomorphic, and there is no reason to assume that an AGI paperclip maximizer would think like we do. Indeed, in Superintelligence, Bostrom avoids any assumption that an AGI would have subjective conscious experiences. An unconscious AGI paperclip maximizer would presumably not be troubled by the fact that a paperclip is just an ill-defined configuration of matter, or by anything else, for that matter.
I imagine that it’s a good illustration of what a humanlike uploaded intelligence that’s had its goals/values scooped out and replaced with valuing paperclips might look like.
Indeed, and such an anthropomorphic optimizer would soon cease to be a paperclip optimizer at all if it could realize the “pointlessness” of its task and re-evaluate its goals.
Well, humans have existential angst despite its having no utility. It just seems like a glitch that you end up with when your consciousness/intelligence reaches a certain level (my reasoning is this: high intelligence requires analysing many “points of view”, many counterfactuals; technically, these end up internalized to some degree). A human honing his general intelligence, a process that allows him to reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by imperatives. In the same way, I believe an AGI would have subjective conscious experiences, as a form of glitch of general intelligence.
Well, glitch or not, I’m glad to have it; I would not want to be an unconscious automaton! As Socrates said, “The life which is unexamined is not worth living.”
However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.
“I would not want to be an unconscious automaton!”
I strongly doubt that such a sentence bears any meaning.
Maybe she cares about other things besides paperclips, including the innate desire to be able to name a single, simple and explicit purpose in life.
This is not supposed to be about non-human AGI paperclip maximisers.
It seems to me that the subject of your narrative has a single, simple and explicit purpose in life; she is after all a paperclip maximizer. I suspect that (outside of your narrative) one key thing that separates us natural GIs from AGIs is that we don’t have a “single, simple and explicit purpose in life”, and that, I suspect, is a good thing.
Not necessarily. You are assuming that she has an explicit utility function, but that need not be the case.
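The architectural distinction at issue here can be sketched in code. This is a toy illustration under my own assumptions (the class names are hypothetical): one agent carries an inspectable utility function it maximizes directly, while the other has no represented goal at all, only behavioral dispositions, much as human goals are often opaque even to the human.

```python
class ExplicitAgent:
    """Architecture with a single, simple, explicit utility function."""
    def utility(self, world):
        # The goal is represented as data: it can be read, stated, compared.
        return world["paperclips"]

    def choose(self, options):
        return max(options, key=self.utility)


class ImplicitAgent:
    """No utility function anywhere in the architecture: preferences exist
    only implicitly, in whatever the learned/evolved policy happens to do."""
    def __init__(self, policy):
        self.policy = policy  # an opaque function from options to a choice

    def choose(self, options):
        return self.policy(options)
```

Only the first kind of agent could answer the question “what is your purpose?” by pointing at something inside itself.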
Good point. May I ask, is “explicit utility function” standard terminology, and if so, is there a good reference somewhere that explains it? It took me a long time to realize the interesting difference between humans, who engage in moral philosophy and often can’t tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don’t understand the orthogonality thesis.
No, I do not believe that it is standard terminology, though you can find a decent reference here.
They’re often called explicit goals, not utility functions. “Utility function” is terminology from a very specific moral philosophy.
Also note that the orthogonality thesis depends on an explicit goal structure. Without such an architecture it should be called the orthogonality hypothesis.
Substitute “Friendly AI” or “Positive Singularity” for “Paperclip Maximizing” and read again.