In order to rationalize my emotions, I have to identify with them in the first place (as opposed to the emotions of my neighbor, say). Especially if I’m supposed to apply descriptive moral psychology, instead of just confabulating unreflectively based on whatever emotions I happen to feel at any given moment. But if I can identify with them, why can’t I dis-identify from them?
I’m not sure I actually understand what you mean by “dis-identify”.
If those questions don’t have factual answers, then I could answer them any way I want, and not be wrong. On the other hand, if they do have factual answers, then I had better use my abstract reasoning skills to find out what those answers are. So why shouldn’t I make realism the working assumption, if I’m even slightly uncertain that anti-realism is true? If that assumption turns out to be wrong, it doesn’t matter anyway—whatever answers I get from using that assumption, including nihilism, still can’t be wrong.
So Pascal’s Wager?
In any case, while there aren’t wrong answers, there are still immoral ones. There is no fact of the matter about normative ethics, but there are still hypothetical AIs that do evil things.
Then there is a fact of the matter about which answers are moral, and we might as well call those that aren’t “incorrect”.
It seems like a waste to overload the meaning of the word “incorrect” to also include such things as “Fuck off! That doesn’t satisfy socially oriented aspects of my preferences. I wish to enforce different norms!”
It really is useful to emphasize a sharp divide between ‘false’ and ‘evil/bad/immoral’. Humans are notoriously bad at keeping those concepts distinct in their minds, and allowing ‘incorrect’ (and related words) to be used for normative claims encourages even more motivated confusion.
No. Moral properties don’t exist. What I’m doing, per the post, when I say “There are immoral answers” is expressing an emotional dissatisfaction with certain answers.
Which question exactly?
True.