At the risk of being self-aggrandizing, I think the idea of axiology vs. morality vs. law is helpful here.
“Don’t be misleading” is an axiological commandment—it’s about how to make the world a better place, and what you should hypothetically be aiming for absent other considerations.
“Don’t tell lies” is a moral commandment. It’s about how to implement a pale shadow of the axiological commandment on a system run by duty and reputation, where you have to contend with stupid people, exploitative people, etc.
(So, for example, I agree with you that the Rearden Metal paragraph is misleading and bad. But it sounds a lot like the speech I give patients who ask for the newest experimental medication: “It passed a few small FDA trials without any catastrophic side effects, but it’s pretty common that this happens and then people discover dangerous problems in the first year or two of postmarketing surveillance. So unless there’s some strong reason to think the new drug is better, it’s better to stick with the old one that’s been used for decades and is proven safe.” I know and you know that there’s a subtle difference here and the Institute is being bad while I’m being good, but any system that tries to impose reputational loss on the Institute at scale, implemented by a mob of dumb people, is pretty likely to hurt me also. So morality sticks to bright-line cases, at the expense of not being able to capture the full axiological imperative.)
This is part of what you mean when you say the report-drafting scientist is “not a bad person”—they’ve followed the letter of the moral law as best they can in a situation where there are lots of other considerations, and where they’re an ordinary person as opposed to a saint laser-focused on doing the right thing at any cost. This is the situation that morality (as opposed to axiology) is designed for, your judgment (“I guess they’re not a bad person”) is the judgment that morality encourages you to give, and this shows the system working as designed, i.e., meeting its own low standards.
And then the legal commandment is merely “don’t outright lie under oath or during formal police interrogations”—which (impressively) is probably *still* too strong, in that we all hear about the police being able to imprison basically whoever they want by noticing small lies committed by accident or under stress.
The “wizard’s oath” feels like an attempt to subject oneself to a stricter moral law than usual, while still falling far short of the demands of axiology.
(Thanks for your patience.)
This is part of what you mean when you say the report-drafting scientist is “not a bad person”—they’ve followed the letter of the moral law as best they can [...] your judgment (“I guess they’re not a bad person”) is the judgment that morality encourages you to give

So, from my perspective as an author (which, you know, could be wrong), that line was mostly a strategic political concession: there’s this persistent problem where when you try to talk about harms from people being complicit with systems of deception (not even to do anything about it, but just to talk about the problem), the discussion immediately gets derailed on, “What?! Are you saying I’m a bad person!? How dare you!” … which is a much less interesting topic.
The first line of defense against this kind of derailing is to be very clear about what is being claimed (which is just good intellectual practice that you should be doing anyway): “By systems of deception, I mean processes that systematically result in less accurate beliefs—the English word ‘deception’ is often used with moralizing connotations, but I’m talking about a technical concept that I can implement as literal executable Python programs. Similarly, while I don’t yet have an elegant reduction of the underlying game theory corresponding to the word ‘complicity’ …”
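To make the “literal executable Python programs” framing concrete, here is a minimal sketch of what such a program might look like. The setup, numbers, and function names are my own illustrative assumptions, not the author’s actual code: a “system of deception” is modeled as a signaling process that leaves a Bayesian receiver with *less* accurate beliefs than the prior it started with, no moralizing connotations required.

```python
from fractions import Fraction

# Toy model of "a process that systematically results in less accurate
# beliefs." The setup and numbers are my own illustration, not the
# author's actual programs.

def bayes_update(prior, likelihood, signal):
    """Receiver's posterior over states after seeing a signal."""
    unnorm = {state: prior[state] * likelihood[state][signal] for state in prior}
    total = sum(unnorm.values())
    return {state: p / total for state, p in unnorm.items()}

prior = {"defective": Fraction(1, 2), "sound": Fraction(1, 2)}
true_state = "defective"

# The receiver trusts this "honest" signal model...
honest = {
    "defective": {"looks_fine": Fraction(1, 10), "looks_bad": Fraction(9, 10)},
    "sound":     {"looks_fine": Fraction(9, 10), "looks_bad": Fraction(1, 10)},
}
# ...but a deceptive sender emits reassuring signals regardless of reality.
deceptive = {
    "defective": {"looks_fine": Fraction(9, 10), "looks_bad": Fraction(1, 10)},
    "sound":     {"looks_fine": Fraction(9, 10), "looks_bad": Fraction(1, 10)},
}

def expected_accuracy(sender_policy):
    """Average probability the receiver assigns to the true state, when
    signals come from sender_policy but are interpreted as honest."""
    return sum(
        sender_policy[true_state][sig] * bayes_update(prior, honest, sig)[true_state]
        for sig in ("looks_fine", "looks_bad")
    )

print(float(expected_accuracy(honest)))     # 0.82 — updates track the truth
print(float(expected_accuracy(deceptive)))  # 0.18 — worse than the 0.5 prior
```

On this toy definition, deceptiveness is a property of the process (the receiver predictably ends up further from the truth), not of anyone’s intent—which is the moralizing-free sense the paragraph above is reaching for.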
The second line of defense is to throw the potential-derailer a bone in the form of an exculpatory disclaimer: “I’m not trying to blame anyone, I’m just saying that …” Even if (all other things being equal) you would prefer to socially punish complicity with systems of deception, by precommitting to relinquish the option to punish, you can buy a better chance of actually having a real discussion about the problem. (Making the precommitment credible is tough, though.)
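The precommitment logic can be sketched as a toy game (the payoff numbers here are my own, purely illustrative): the potential derailer engages only when the expected social punishment for engaging is low enough, which is exactly what a credible “I’m not trying to blame anyone” is supposed to buy.

```python
# Toy commitment game; the payoff numbers are my own illustration.
ENGAGE_VALUE = 1.0   # what a real discussion is worth to the potential derailer
PUNISHMENT = 3.0     # social cost if the critic later punishes them for engaging
DERAIL_VALUE = 0.0   # derailing is safe but produces no discussion

def best_response(p_punish):
    """Engage only if the expected payoff beats the safe option."""
    return "engage" if ENGAGE_VALUE - p_punish * PUNISHMENT > DERAIL_VALUE else "derail"

# With no disclaimer, they suspect punishment is likely and derail;
# a credible precommitment not to punish drives that suspicion down.
print(best_response(p_punish=0.5))   # derail
print(best_response(p_punish=0.05))  # engage
```

The hard part, as the parenthetical notes, is making the probability of punishment actually low *in the listener’s estimation*—an incredible precommitment changes nothing.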
Ironically, this is an instance of the same problem it’s trying to combat (“distorting communication to appease authority” and “distorting communication in order to appease people who are afraid you’re trying to scapegoat them on the pretext of them distorting communication to appease authority” are both instances of “distorting communication because The Incentives”), but hopefully a less severe one, whose severity is further reduced by explaining that I’m doing it in the comments.
You can also think of the “I’m not blaming you, but seriously, this is harmful” maneuver as an interaction between levels: an axiological attempt to push for a higher moral standard in a given community, while acknowledging that the community does not yet uphold the higher standard (analogous to a moral attempt to institute tougher laws, while acknowledging that the sin in question is not a crime under current law).
noticing small lies committed by accident or under stress.

Lies committed “by accident”? What, like unconsciously? (Maybe the part of your brain that generated this sentence doesn’t disagree with Jessica about the meaning of the word lie as much as the part of your brain that argues about intensional definitions??)