I am not convinced by the longer post either. I don’t see a reason to suppose that a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious, nor, if conscious, that it would value the happiness of other conscious entities. Considering the moral variety of humans, from saints to devils, and our distance in intelligence from chimpanzees, I find it hard to believe that more dakka in that department is all it would take to make saints of us. And that’s among a single evolved species with a vast amount in common with each other. For aliens of unknown mental constitution, I would say that all bets are off.
Hey I think your comment is slightly misleading:
I don’t see a reason to suppose that a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious
I do not make those assumptions.
nor, if conscious, that it would value the happiness of other conscious entities
I don’t suppose that either; I give an argument for that (in the longer post).
Anyway:
I am not convinced by the longer post either
I’m not surprised: I don’t expect my argument to move masses of people who are already convinced of the opposite claim, but I do hope that someone who is uncertain and open-minded can read it, find something useful in it, or find a reason to update their beliefs. That’s also why I wrote that the practical implications for AI are an important part of that post, and why I made some predictions instead of focusing just on philosophy.
But you were arguing for them, weren’t you? It is the arguments that fail to convince me. I was not treating these as bald assertions.
Being sure of a thing does not preclude my entertaining other ideas.
No, I don’t argue that “a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious”. I think those statements are false.