From my perspective, arguments for moral realism generally fall into one of three categories, all of which are bad:
(1) Any agent should recognize X as good, because X is associated with Y, which is self-evidently good
…But that begs the question: why is Y self-evidently good?
(2) Any agent should recognize X as good, because X leads to that agent feeling more pleasure and less pain, which is good for the agent by definition
…But that’s not an argument for intrinsically valuing X. Smart agents will pursue X via means-end reasoning, even if they don’t care about X for its own sake.
(3) OK maybe it’s NOT true that any agent can reason its way to recognizing X as good. For example, maybe it’s really true that, if I were born a sadistic sociopath, then I would see hurting people as intrinsically good, and no amount of reflection would change my mind. But I don’t care, hurting people is still Not Good. Goodness is a thing, and that ain’t it.
This gets into a pointless version of the moral realism vs. anti-realism debate, one that is merely about semantics and definitions, and that doesn't affect any actual decisions or predictions.
So: when you say “In many multi-turn complex games, optimal strategies imply cooperation”, that seems to be in category (2).
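To make concrete why I file that under (2), here's a minimal sketch (my own toy illustration, with the usual illustrative payoff numbers) of an iterated prisoner's dilemma. An agent that cares only about its own payoff ends up cooperating against tit-for-tat purely as a means to that payoff, not because it values cooperation for its own sake:

```python
# Toy iterated prisoner's dilemma: a purely self-interested agent playing against
# tit-for-tat. The payoff numbers are the standard illustrative ones; nothing deep
# hangs on the exact values.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def my_total_payoff(my_strategy, rounds=100):
    """Play `rounds` rounds against tit-for-tat and return my total payoff."""
    their_move = "C"  # tit-for-tat opens by cooperating
    total = 0
    for _ in range(rounds):
        my_move = my_strategy(their_move)
        total += PAYOFFS[(my_move, their_move)]
        their_move = my_move  # tit-for-tat copies whatever I just did
    return total

print(my_total_payoff(lambda their_last: "D"))  # always defect: 104
print(my_total_payoff(lambda their_last: "C"))  # always cooperate: 300
```

Nothing in there assigns any intrinsic value to cooperation; the cooperative strategy just scores higher against tit-for-tat, which is exactly why this kind of result can't get you from "cooperation pays" to "cooperation is good" in the realist's sense.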
Then you say “what’s lacking in the game-theoretic foundation of a (weak) moral realism can be largely corrected if you apply Rawls’s veil of ignorance to it”. That seems to be in category (1). Thus the sadistic sociopath can reply: “And why exactly should I care what decisions I would make under Rawls’s veil of ignorance? I’m not under Rawls’s veil of ignorance!”
And you say “if you think that a value, symbolized by the flag, is relative and doesn’t rationally merit more than those of the opposite side, why would you give your life for it in the first place”. But that’s a big “if”. The “if” is assuming the conclusion. If I had been born a paperclip maximizer, I would think that paperclips are the best possible thing, and that it’s right and proper to replace everything else in the universe with paperclips. But I wasn’t born a paperclip maximizer, I like love and friendship and whatever. Does that make my desire for love and friendship and whatever “relative” and “not rational”? Beats me, I don’t know exactly what you mean by that. But if it does, then I would seem to not care that my values are “relative” and “not rational”. I stand firm that wiping out all life in the universe and replacing it with paperclips is bad. I don’t care if that makes my values “relative”, whatever that means. I think you’re again doing the (1) thing of treating a certain kind of impartiality-of-values as self-evidently good.