I often see people advocate that others sacrifice their souls. People often justify lying, political violence, coverups of “your side’s” crimes and misdeeds, or professional misconduct by government officials and journalists, because their cause is sufficiently True and Just. I’m overall skeptical of this entire class of arguments.
This is not because I intrinsically value “clean hands” or seeming good over actual good outcomes. Nor is it because I have a sort of magical thinking common in movies, where things miraculously work out well if you just ignore tradeoffs.
Rather, it’s because I think the empirical consequences of deception, violence, criminal activity, and other norm violations are often (not always) quite bad, and people aren’t smart or wise enough to tell the exceptions apart from the general case, especially when they’re ideologically and emotionally compromised, as is often the case.
Instead, I think it often helps to be interpersonally nice, conduct yourself with honor, and overall be true to your internal and/or society-wide notions of ethics and integrity.
I’m especially skeptical of galaxy-brained positions where to be a hard-nosed consequentialist or whatever, you are supposed to do a specific and concrete Hard Thing (usually involving harming innocents) to achieve some large, underspecified, and far-off positive outcome.
I think it’s like those thought experiments about torturing a terrorist (or a terrorist’s child) to find the location of a ticking nuclear bomb under Manhattan, where somehow you know the torture would work.
I mean, sure, if presented that way I’d think it’s a good idea, but has anybody here checked the literature on the reliability of evidence extracted under torture? Is that really the most effective interrogation technique?
So many people seem eager to rush to sell their souls, without first checking to see if the Devil’s willing to fulfill his end of the bargain.
Rather, it’s because I think the empirical consequences of deception, violence, criminal activity, and other norm violations are often (not always) quite bad
I think I agree with the thrust of this, but I also think you are making a pretty big ontological mistake here when you put “deception” in the category of “norm violations”. And this is really important, because it illustrates why the thing you are telling people not to do is actually tricky to avoid doing.
Like, the most common forms of deception people engage in are socially sanctioned forms of deception. The most common forms of violence people engage in are socially sanctioned forms of violence. The most common forms of cover-up are socially sanctioned forms of cover-up.
Yes, there is an important way in which the socially sanctioned forms of deception and violence have to be individually relatively low-impact (forms of deception or violence with large immediate consequences usually lose their social sanction). But this makes the question of when to follow the norms, and when to be honest or violent against local norms, a question of generalization. Many forms of extremely destructive deception, while socially sanctioned, might not lose their sanction before causing irreparable harm.
So most of the time the real question here is something like “when are you willing to violate local norms in order to do good?”, and I think “when honesty is violating local norms, be honest anyways” is pretty good guidance, but the fact that it’s often against local norms is a really crucial part of the mechanism here.
Being dishonest when honesty would violate local norms doesn’t necessarily feel like selling your soul. Concretely, in most normal groups of people, it is considered a norm violation to tell someone their shirt is really ugly, even if this is definitely true. So I would only tell someone this if I was sufficiently confident that they would take it well: maybe I’ve known them long enough to know they like the honesty, or we are in a social setting where people expect it. IMO, it doesn’t take galaxy-brained consequentialism to arrive at this particular norm, or impugn one’s honor to comply with it.
I like the phrase “myopic consequentialism” for this. It often has bad consequences because bounded agents need to cultivate virtues (distilled patterns that work well across many situations, even when you don’t have the compute or information to see exactly why they’re good in many of those) rather than trying to brute-force search a large universe.
I personally find the “virtue is good because bounded optimization is too hard” framing less valuable/persuasive than the “virtue is good because your own brain and those of other agents are trying to trick you” framing. Basically, the adversarial dynamics seem key in these situations; otherwise a better heuristic might be to focus on the highest-order bit first and then work down the importance ladder.
Though of course both are relevant parts of the story here.
To be the devil’s advocate (and also try to clarify my own confusions about this argument, even though I agree with it)...
Rather, it’s because I think the empirical consequences of deception, violence, criminal activity, and other norm violations are often (not always) quite bad, and people aren’t smart or wise enough to tell the exceptions apart from the general case, especially when they’re ideologically and emotionally compromised, as is often the case.
Why expect that the anti-social activity you’re hearing about is a representative sample of the anti-social activity that occurs? In particular, if you are committing an anti-social act which everybody knows about, then of course it will turn out badly for you & your goals, since people will punish you. Moreover, if we are hearing about some anti-social action, we know that something has gone wrong in the evil-doer’s plans, and therefore we ought to expect that many other things have also gone wrong, so the two observations (noting the anti-social action and noting the results did not go according to plan) are not independent.
I’m also pretty suspicious that a lot of the history we have, especially narrative history, has been spun or filtered as a way to argue for virtues and vices chosen largely independently of that history. That is, lots of history seems to be morality tales, especially history which is very memetically fit & salient.
It also seems relevant to note that if we look at the people (and organizations) currently with a lot of power, or who otherwise seem to be accomplishing many of their goals, they do not seem like they got that power by always telling the truth, never disobeying any laws, and always following strict Kantian deontology.
I find the above argument unconvincing; however, I don’t think I could convincingly argue against it to someone who did find it convincing.
Thanks, this is a helpful point! The second one has been on my mind re: assassinations, and is implicitly part of my model for uncertainty about assassination effectiveness (I still think my original belief is largely correct, but I can’t rule out psy ops).
So many people seem eager to rush to sell their souls, without first checking to see if the Devil’s willing to fulfill his end of the bargain.
In cases like this I assume the point is to prove one’s willingness to make the hard choice, not to be effective (possibly to the extent of being ineffective on purpose). This can be proving it just to oneself, out of fear of being the kind of person who’s not able to make the hard choice: if I’m not in favor of torturing the terrorist, that might be because I’m squeamish (= weak, or limited in what I can think (= unsafe)), so I’d better favor doing it without thought of whether it’s a good idea.
I worry there is a bit of wishful thinking involved in the high number of upvotes. I struggle to usefully pinpoint where my vibes disagree, but my impression is that the world is rife with examples of the powerful behaving badly and totally getting away with it.
If I look at corporate scandals, the case for fraud and nefariousness seems pretty strong. Just off the top of my head: Dieselgate, where Volkswagen deliberately deceived official testers about the level of pollution their cars produce; Bayer knowingly selling HIV-contaminated blood products; Goldman Sachs selling off assets they knew to be worthless just prior to the 2008 crash. Sure, fines are often involved if caught, but the reputation of the companies remains surprisingly intact, and if you take into account all the cases where they were not caught, I’d guess the crimes were worth it financially overall.
On an individual level it’s a bit murkier whether it’s worth it. I guess for most people the stress of violating norms and being caught will cause a net wellbeing loss. That notwithstanding, given how much our society defaults to trust, there is lots of low-hanging fruit for slightly motivated, competent, ruthless people. Just one example: outright fabricating large parts of your CV seems relatively rare despite the low likelihood of being caught and the mild consequences if you are.
Agreed. You’re rationalizing niceness as a good default strategy because most people aren’t skilled at avoiding the consequences of being mean. Reflecting on your overall argument, however, I think it’s slightly tortured because you’re feeling the tension of the is-ought distinction (Hume’s guillotine). Rational arguments for being nice feel morally necessary and therefore can be a bit pressured. There’s only so far we can push rational argumentation (elicitation of is) before we should simply acknowledge moral reality and say: “We ought to be nice.”
(x-posted from Substack)
Basically completely agreed, and I really like that last line.
Related essay: https://vitalik.eth.limo/general/2025/11/07/galaxybrain.html