There are no enemies. More importantly, there are no pure allies. There are only changing and varied relationships, which include both cooperation and conflict.
There is a difference of degree, not of type, between your closest friend and your bitter enemy. And it changes over time, at a different rate than information exchange does. In the same sense that you don’t want to literally torture and kill someone you’re currently at war against (since you could be allies in the next conflict, AND since you don’t want them to take the same attitude toward you), you don’t want perfect transparency and honesty with your current family or friends (knowing you will be opposing them on some topics in the future).
Heck, most people aren’t well-served by putting most of their effort into self-truth. Many truths are unhelpful in day-to-day activities, and some may be actually harmful. I suspect (but it’s hard to measure) there’s a kind of uncanny valley of truth—for some topics, a marginal increase in true knowledge is actually harmful, and it takes a whole lot more knowledge to get back above the utility one had before starting.
For some topics, of course, lies and misdirection are wasteful and harmful, to friends as well as enemies.
I’m kinda surprised this comment is so controversial. I’m curious what people are objecting to that’s resulting in downvotes.
I’m surprised by how controversial the OP is, and… all the comments so far?
I downvoted the OP—it didn’t have anything new, and didn’t really make any logical connections between things; it just stated a final position on something that’s nowhere near as simple as presented. Oh, and because it’s completely unworkable for a consequentialist who doesn’t have a reliable “permanent enemy detector”, which is the point of my comment.
I didn’t expect the mixed reaction to my comment, but I kind of didn’t expect many votes in either direction. To some extent I perpetrated the same thing as the OP—not a lot of novelty, and no logical connection between concepts. I think it was on-topic and did point out some issues with the OP, so I’m taking the downvotes as disagreement rather than annoyance over presentation.
edit: strong votes really make it hard to get a good signal from this. Currently, this comment has ONE vote for +10 karma, and my ancestor comment responding to the post itself has 11 votes for +6 karma. I’ve removed my default 2-point vote from both. But what the heck am I supposed to take from those numbers?
That’s totally fair for LessWrong, haha. I should probably try to reset things so my blog doesn’t automatically post here except when I want it to.
I don’t think you are fully getting what I am saying, though that’s understandable because I haven’t added any info on what makes a valid enemy.
I agree there are rarely absolute enemies and allies. There are however allies and enemies with respect to particular mutually contradictory objectives.
Not all war is absolute: wars have at times been deliberately bounded in space, and having rules of war in the first place is evidence of partial cooperation between enemies. You may have adversarial conflicts of interest with close friends on some issues; if you can’t align those interests, it isn’t the end of the world. The big problem is lies and sloppy reasoning that go beyond defending one’s own interests into causing unnecessary collateral damage for large groups. The entire framework here is premised on the same distinction you seem to think I don’t have in mind… which is fair, because it was unstated. XD
The big focus is a form of cooperation between enemies to reduce the large-scale, indiscriminate collateral damage of dishonesty. It is easier to start this cooperation between actors that are relatively more aligned, before scaling to actors that are relatively less aligned with each other. Do you sense any floating disagreements remaining?
I think if you framed it as “every transaction and relationship has elements of cooperation and competition, so every communication has a need for truth and deception,” and then explored the specific types of trust and conflict, and how they impact the dimensions of communication, you’d be in excellent-post territory.
The bounds of understanding in humans mean that we simply don’t know the right balance of cooperation and competition. So we have, at best, some wild guesses as to what’s collateral damage vs what’s productive advantage over our opponents. I’d argue that there’s an amazing amount of self-deception in humans, and I take a Schelling Fence approach to that—I don’t understand the protection and benefit to others’ self-deception and maintained internal inconsistency, so I hesitate to unilaterally decry it. In myself, I strive to keep self-talk and internal models as accurate as possible, and that includes permission to lie without hesitation when I think it’s to my advantage.