Well I could give you a bunch of conformist bs, OR since this is lesswrong.com I could tell you how I actually feel.
The reason I care about benevolent AIs is not the survival of the human race. It’s a more personal reason: pain and suffering. I am convinced (this is, of course, a matter of faith) that benevolent AIs would vastly improve my quality of life and that malicious AIs could greatly reduce it (for example, by gruesomely killing me, or worse, inflicting a large amount of pain on me). Thus it is in my personal interest that AI be benevolent.
I don’t really care about the status quo. As long as AIs are benevolent, I don’t care if all of human civilization as it currently exists is destroyed and replaced by something else.
As for the relative difficulty of making AIs benevolent versus making them smart, I don’t think a rational answer can be given, simply because we don’t know enough about what it would take to make them benevolent.
In the science fiction of the ’50s and ’60s, it was common to depict AIs as understanding speech easily but being unable to talk, or able to speak only in simple sentences. People assumed that producing language would be harder to program than understanding it. A survey of current chatbots shows this to be ridiculously backward.