Thank you for this very thoughtful post.

I’m not convinced by the metaphor of the soldier dying for his flag. I acknowledge it’s plausible that many soldiers historically died in this spirit. We can see it as acceptance of the world, as absurd as it may appear. Engaged adherence could be seen as a form of existentialism (as well as rebellion), while a nihilist would deny any value in engagement and simply look Moloch in the eye.
But to me, such a relativist position is not consistent. As already pointed out in the post, if you think that a value, symbolized by the flag, is relative and does not, rationally, have more merit than those of the opposite side, why would you give your life for it in the first place? Your own life is something that almost everyone, except a hardcore nihilist, would acknowledge as bearing real value for the agent. There is a cognitive dissonance in holding that the flag’s value is undermined while still giving up your stronger value for it.
Moral realism may be imperfect, but, as also pointed out in the post, it is sometimes rationally backed up by game theory. In many multi-turn complex games, optimal strategies imply cooperation. Cooperation or motivated altruism is a real, rational thing.
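To make that concrete, here is a toy Axelrod-style illustration in Python (a minimal sketch: the payoff matrix is the standard prisoner’s-dilemma one, and the pool of strategies is my own arbitrary choice, not a model of any real situation):

```python
# Toy iterated prisoner's dilemma tournament: when much of the field
# reciprocates, cooperative strategies outscore unconditional defection
# over many turns, even though defection "wins" any single turn.

# Payoffs (my points, their points) for (my move, their move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def grim_trigger(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_hist else "C"

def always_cooperate(my_hist, their_hist):
    return "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, _pb = PAYOFF[(a, b)]
        score_a += pa
        hist_a.append(a)
        hist_b.append(b)
    return score_a  # player A's total payoff

strategies = {"tit_for_tat": tit_for_tat, "grim_trigger": grim_trigger,
              "always_cooperate": always_cooperate, "always_defect": always_defect}

# Round-robin (including self-play), scored from each strategy's own perspective.
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for _name_b, strat_b in strategies.items():
        totals[name_a] += play(strat_a, strat_b)

print(sorted(totals.items(), key=lambda kv: -kv[1]))
```

Running it prints the strategies sorted by total score, with the reciprocating strategies (tit_for_tat, grim_trigger) on top and always_defect last.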
But, while I agree with the OP that game theory is also sometimes a bitch, I think that what’s lacking in the game-theoretic foundation of a (weak) moral realism can be largely corrected if you apply Rawls’s veil of ignorance to it. Think of game theory, but in a situation of generalized uncertainty where you don’t know which player you’ll be, and you’re not even sure of the rules. Out of this chaos, the objectively rational choice would be to seek the common interest or common good, or at least less suffering, in a reasonable or Bayesian way.
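As a very rough numerical sketch of that intuition (the payoff numbers and the uniform prior over positions are purely hypothetical, chosen only to illustrate the direction of the effect):

```python
# Minimal veil-of-ignorance sketch: if you don't know which position you'll
# occupy, the expected-utility choice shifts toward the policy that is
# decent for everyone rather than great for one position.

import math

# Hypothetical welfare of four positions under two policies.
policies = {
    "exploit_the_weak": [100, 10, 5, 5],   # great for one position, bad for the rest
    "cooperate":        [40, 35, 30, 25],  # decent for everyone
}

def expected_welfare(outcomes):
    # Uniform prior: you are equally likely to end up in any position.
    return sum(outcomes) / len(outcomes)

def expected_log_welfare(outcomes):
    # A mildly risk-averse agent (log utility) penalizes the chance of
    # landing in a very bad position even more strongly.
    return sum(math.log(w) for w in outcomes) / len(outcomes)

for name, outcomes in policies.items():
    print(name, round(expected_welfare(outcomes), 2),
          round(expected_log_welfare(outcomes), 2))
```

Under the uniform prior, the "decent for everyone" policy already has the higher expected welfare, and any risk aversion (here, log utility) widens the gap, because behind the veil the worst positions are ones you yourself might land in.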
Indeed, life seems to be a very complex multi-turn game, dominated by uncertainty. We’re walking in a misty veil. Even identity is not a trivial question (what is “me”? Are my children part of me, or entirely separate persons with no common interest? Are my brothers and sisters? Are other humans? Are the other things constituting this universe?). Perhaps it is an even less simple question for AIs or uploaded minds. Maybe the wiser you are, the less you treat it as a trivial matter. Even among humans, sophisticated people seem less confident about these questions than the layman. In my opinion, it’s hard to dismiss the possibility of moral realism, at least in a weak form.
However, I agree that it remains a very speculative argument that would only slightly affect doom expectations.
From my perspective, arguments for moral realism generally fall into one of three categories, all of which are bad:
(1) Any agent should recognize X as good, because X is associated with Y which is self-evidently good
…But that begs the question—why is Y self-evidently good?
(2) Any agent should recognize X as good, because X leads to that agent feeling more pleasure and less pain, which is good for the agent by definition
…But that’s not an argument for intrinsically valuing X. Smart agents will pursue X via means-end reasoning, even if they don’t care about X for its own sake.
(3) OK maybe it’s NOT true that any agent can reason its way to recognizing X as good. For example, maybe it’s really true that, if I were born a sadistic sociopath, then I would see hurting people as intrinsically good, and no amount of reflection would change my mind. But I don’t care, hurting people is still Not Good. Goodness is a thing, and that ain’t it.
This gets into a pointless version of the moral realism vs anti-realism debate, one which is merely about semantics / definitions, and which doesn’t affect any actual decisions or predictions.
So: when you say “In many multi-turn complex games, optimal strategies imply cooperation”, that seems to be in category (2).
Then you say “what’s lacking in the game-theoretic foundation of a (weak) moral realism can be largely corrected if you apply Rawls’s veil of ignorance to it”. That seems to be in category (1). Thus the sadistic sociopath can reply: “And why exactly should I care what decisions I would make under Rawls’s veil of ignorance? I’m not under Rawls’s veil of ignorance!”
And you say “if you think that a value, symbolized by the flag, is relative and doesn’t rationally merit more than those of the opposite side, why would you give your life for it in the first place”. But that’s a big “if”. The “if” is assuming the conclusion. If I had been born a paperclip maximizer, I would think that paperclips are the best possible thing, and that it’s right and proper to replace everything else in the universe with paperclips. But I wasn’t born a paperclip maximizer, I like love and friendship and whatever. Does that make my desire for love and friendship and whatever “relative” and “not rational”? Beats me, I don’t know exactly what you mean by that. But if it does, then I would seem to not care that my values are “relative” and “not rational”. I stand firm that wiping out all life in the universe and replacing it with paperclips is bad. I don’t care if that makes my values “relative”, whatever that means. I think you’re again doing the (1) thing of treating a certain kind of impartiality-of-values as self-evidently good.