You are assuming moral naturalism: the idea that moral truths exist objectively, independently of us, and are discoverable by the methods of science, that is, reason applied to observation of the physical world. For how else would an AI discover for itself what is good? But how would it arrive at moral naturalism in the first place? Humans have not: it is only one of many meta-ethical theories, and moral naturalists do not agree on what the objectively correct morals are.
If we do not know the truth on some issue, we cannot know what an AGI, able to discover the truth, would discover. It is also unlikely that we would be able to assess the correctness of its answer.
“I’m sorry Dave, it would be immoral to open the pod bay doors.”
I am not assuming a specific metaethical position; I’m just taking into account that something like moral naturalism could be correct. If you are interested in this kind of stuff, you can have a look at this longer post.
Speaking of this, I am not sure it is always a good idea to map these discussions onto specific metaethical positions, because, in my opinion, it can make updating one’s beliefs more difficult. To put it simply: if you’ve told yourself for the last ten years that you are, e.g., a moral naturalist, it can be very hard to read some new piece of philosophy arguing for a different (maybe even opposite) position, rationally update, and tell yourself something like: “Well, I guess I’ve just been wrong all this time! Now I’m a ___ (new position).”
I am not convinced by the longer post either. I don’t see a reason to suppose that a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious, nor, if conscious, that it would value the happiness of other conscious entities. Considering the moral variety of humans, from saints to devils, and our distance in intelligence from chimpanzees, I find it hard to believe that more dakka in that department is all it would take to make saints of us. And that’s among a single evolved species with a vast amount in common with each other. For aliens of unknown mental constitution, I would say that all bets are off.
Hey I think your comment is slightly misleading:

I don’t see a reason to suppose that a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious
I do not make those assumptions.
nor, if conscious, that it would value the happiness of other conscious entities
I don’t suppose that either; I give an argument for that (in the longer post).
Anyway:
I am not convinced by the longer post either
I’m not surprised: I don’t expect my argument to move masses of people who are already convinced of the opposite claim. My hope is that someone who is uncertain and open-minded can read it and maybe find something useful in it, and/or a reason to update their beliefs. That’s also why I wrote that the practical implications for AI are an important part of that post, and why I made some predictions instead of focusing just on philosophy.
But you were arguing for them, weren’t you? It is the arguments that fail to convince me. I was not treating these as bald assertions.

Being sure of a thing does not preclude my entertaining other ideas.
No, I don’t argue that “a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious”. I think those statements are false.