The question Eliezer raises is the first problem any religious person has to face once he abandons the god thesis, i.e. why should I be good now? The answer, I believe, is that you cannot act contrary to your genetic nature. Our brains are wired (or have modules, in Pinker's terms) for various forms of altruism, probably for group-survival reasons. I therefore can't easily commit acts against my genetic nature, even if intellectually I can see they are in my best interests. (As Eliezer has already recognised, this is why AIs or uploaded personalities are so dangerous; they will be able to rewrite the brain code that prevents widespread selfishness. I say dangerous, of course, because the first uploaded person or AI will likely not be me, so they will be a threat to me.)
More simply, the reason I don’t steal from people is not that stealing is wrong, but that my genetic programming (perhaps also an element of social conditioning) is such that I don’t want to steal, or have an active non-intellectual aversion to stealing.
Why do I try to convince you of this point of view if I am intellectually convinced that I should be selfish? I agree with Robin: it is because I am genetically programmed to do so, probably related to status seeking. Also, I genuinely would like to hear arguments against this point of view, in case I am wrong.
Eliezer, if genetics is the source of our ethical actions, it is unlikely we can ever develop a consistent ethical theory. If you accept this, does it not present a big problem for your attempt to create an ethical AI? Is it possible that your rejection of this approach to ethics, and your attempt to prove a standalone moral system, is subconsciously driven by the impact this would have on your work?