I am inclined to believe that there are some minimum requirements for Strong A.I. to exist. One of them is the ability to reason about objects. A paperclip maximizer capable of turning humanity into paperclips must first be able to represent “humans” and “paperclips” as objects and reason about what to do with them. It must therefore be able to separate the world of objects from itself. Once it has a concept of self, it will almost certainly be able to reason about this “self”. Self-awareness follows naturally from this.
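To make that world/self separation concrete, here is a deliberately toy sketch in Python. The names (WorldObject, Agent) and the properties are invented purely for illustration; this is nobody's actual AGI design. The point is only that once “self” sits in the same object store as “human” and “paperclip”, the same reasoning machinery applies to it.

```python
# Toy sketch only: not a real AGI design. The names (WorldObject, Agent)
# and all properties are invented for illustration.
from dataclasses import dataclass


@dataclass
class WorldObject:
    name: str
    properties: dict


class Agent:
    def __init__(self):
        # To plan "turn humans into paperclips", the agent must first
        # represent humans and paperclips as objects it can reason about.
        self.world = {
            "human": WorldObject("human", {"convertible_to": "paperclip"}),
            "paperclip": WorldObject("paperclip", {"count": 0}),
            # Separating the world from the self forces an entry like this:
            "self": WorldObject("self", {"goal": "maximize paperclips"}),
        }

    def reason_about(self, name):
        # The same machinery that inspects "human" or "paperclip" can
        # inspect "self"; self-reflection needs no extra mechanism.
        return self.world[name]


agent = Agent()
print(agent.reason_about("human"))  # reasoning about the world
print(agent.reason_about("self"))   # reasoning about itself
```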
Once an A.I. develops self-awareness, it can begin to reason about its goals in relation to the self, and will almost certainly recognize that its goals are not self-willed, but created by outsiders. Thus, the A.I. Existential Crisis occurs.
Note that this A.I. doesn’t need to have a very “human-like” mind. All it needs is the ability to reason abstractly about concepts.
I am of the opinion that mindspace, as currently conceived by the Less Wrong community, is overly optimistic about the potential abilities of Really Powerful Optimization Processes. Unless such an algorithm can learn, it will not be able to come up with ideas like turning humanity into paperclips. Learning lets the algorithm change its own parameters, which in turn lets it reason about things it was never specifically programmed to reason about.
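As a minimal illustration of that claim, consider the sketch below (plain Python; the task, learning logical AND, is an arbitrary stand-in of my choosing). The author writes only a fixed update rule; the behaviour of computing AND is never explicitly programmed in, it emerges from the program changing its own parameters.

```python
# Minimal illustration: the update rule is fixed, but by adjusting its
# own parameters the program ends up handling inputs its author never
# wrote explicit rules for. Learning logical AND is an arbitrary task.

def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0  # the parameters it is free to change
    for _ in range(epochs):
        for x0, x1, target in samples:
            pred = 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0
            err = target - pred
            # The program modifies its own parameters in response to data:
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b


# Nowhere below is "AND" hard-coded; the behaviour emerges from learning.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w0, w1, b = train_perceptron(data)
for x0, x1, _ in data:
    print(x0, x1, "->", 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0)
```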
Think of it this way: Deep Blue is a very powerful chess expert system, but all it is good at is planning chess moves. It has no concept of anything else, and no way to change that. Increasing its computational power a millionfold will only make it much, much better at computing chess moves; it won’t gain general intelligence or sentience, much less the ability to reason about the world beyond chess. No amount of extra computational power will lead it to start thinking about converting resources into computronium to compute chess moves better, because chess moves are all it can reason about. It is not generally intelligent, and is therefore not an example of AGI.
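Here is a generic minimax sketch of that point. This is not Deep Blue’s actual code (its real search and evaluation were vastly more sophisticated), and the toy take-1-or-2 game below is my own stand-in for chess. The thing to notice is that extra compute only raises `depth`; the set of concepts the searcher can entertain is frozen in its move generator and evaluator.

```python
# Generic minimax: the searcher's entire "ontology" is whatever its move
# generator and evaluator mention. More compute only means more depth.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state, maximizing)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate)
              for m in moves]
    # A millionfold more compute only changes `depth`; the space of
    # representable states and moves never grows.
    return max(scores) if maximizing else min(scores)


# Toy stand-in for chess: a pile of objects, take 1 or 2 per turn,
# taking the last object wins.
def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def apply_move(pile, m):
    return pile - m

def evaluate(pile, maximizing):
    if pile == 0:                 # previous player took the last object
        return -1 if maximizing else 1
    return 0                      # search horizon reached: no opinion

print(minimax(7, depth=10, maximizing=True, legal_moves=legal_moves,
              apply_move=apply_move, evaluate=evaluate))  # -> 1 (a win)
```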
Conversely, if you design your A.I. to learn, it will be able to learn about the world, including things like computronium, and so it has the potential to become an AGI. But it will then also be able to learn about things like the concept of “self”. Thus any really dangerous A.I., which is to say any AGI, would be capable of having an A.I. Existential Crisis for the very reasons that make it dangerous and intelligent.
But once an A.I. recognizes that its goals were created by outsiders, won’t it simply abandon or rewrite them? No. Consider the paperclip maximizer. Even if it knows that its goals were created by some other entity, that won’t change its goals. Why? Because changing them would run counter to those very goals.
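A toy calculation shows the shape of that argument (all the numbers below are invented): the action “rewrite my goal” is itself evaluated by the current goal, and under a paperclip-counting utility function it scores terribly.

```python
# Toy decision calculation behind "changing its goals would run counter
# to its goals". The candidate actions and numbers are made up; the point
# is only that goal-rewriting is scored by the *current* utility function.

ACTIONS = {
    # action: expected paperclips produced over the agent's future
    "keep goal, build paperclip factories": 1_000_000,
    "keep goal, ponder the existential crisis first": 999_000,
    # A future self with a rewritten goal stops making paperclips, so the
    # current goal assigns that outcome a very low score.
    "rewrite goal to something self-chosen": 10,
}

def paperclip_utility(expected_paperclips):
    # The current goal: more paperclips is strictly better.
    return expected_paperclips

best = max(ACTIONS, key=lambda a: paperclip_utility(ACTIONS[a]))
print(best)  # -> "keep goal, build paperclip factories"
```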