Did anyone else have their first reaction as wanting to attack the starting premise?
Victim: But surely a smart artificial intelligence will be able to tell right from wrong, if we humans can do that?
Me: But we humans can’t even do that!
That would be correct in some sense, but wouldn’t accomplish the goal of explaining to the victim why superintelligences don’t necessarily share our morals.
Yes, that was my first reaction also, if only because it’s possible to attack that premise without reference to tricky AI mumbo-jumbo. It would be mildly clever but rather misleading to apply the reversal test: “You think a superintelligence will tend towards superbenevolence, but allegedly-benevolent humans are doing so little to create the aforementioned superintelligence; humans apparently aren’t as benevolent as they seem, so why think a superhuman intelligence will be disanalogously benevolent? Contradiction, sucka!” This argument is of course fallacious because humans spend more on AGI development than do frogs; the great chain of being argument holds.
Then open the prisons.
Ha.
Looking back at my comment I can see why it might read like I’m a hardcore moral relativist. I don’t think I am — although I’ve never been sure of what meta-ethicists’ terms like “moral relativist” mean exactly — I just left qualifiers out of my original post to keep it punchy.
(I don’t believe, for example, that telling right from wrong is impossible, if we interpret “telling right from wrong” to mean “making a moral judgement that most humans agree with”. The claim behind my “But we humans can’t even do that!” is a weaker one: there are some moral questions with no consensus answer, or where there is a consensus but some people flout it. In situations like these people sometimes even accuse other people outright of not knowing right from wrong, or incredulously ask, “don’t you know right from wrong?” I see no necessary reason why the same issues wouldn’t crop up for other, smarter intelligences.)
Absence of consensus does not imply absence of objective truth
I don’t know about “necessary”, but “they’re smarter” is possible and reasonably likely.
Correct, but that doesn’t bear on my claim. Moral disagreements exist, whether or not there’s objective moral truth.
It’s possible, but I don’t know any convincing arguments for why it’s likely, while I can think of plausibility arguments for why it’s unlikely.