Maybe Lying Doesn’t Exist

In “Against Lie Inflation”, the immortal Scott Alexander argues that the word “lie” should be reserved for knowingly-made false statements, and not used in an expanded sense that includes unconscious motivated reasoning. Alexander argues that the expanded sense draws the category boundaries of “lying” too widely in a way that would make the word less useful. The hypothesis that predicts everything predicts nothing: in order for “Kevin lied” to mean something, some possible states-of-affairs need to be identified as not lying, so that the statement “Kevin lied” can correspond to redistributing conserved probability mass away from “not lying” states-of-affairs onto “lying” states-of-affairs.
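To make the probability-mass point concrete, here is a minimal Bayesian sketch (my own illustration, not from Alexander’s or Taylor’s posts; the function name and all of the numbers are made up for the example): an accusation like “Kevin lied” only moves our beliefs if some possible observations are more likely under “lied” than under “not lied”. If the category is drawn so broadly that the likelihoods are equal, the posterior equals the prior and the accusation carries no information.

```python
# Minimal sketch (illustrative numbers only): an accusation is informative
# only if it redistributes probability mass between competing states-of-affairs.

def posterior_lied(prior_lied, p_evidence_given_lied, p_evidence_given_not_lied):
    """Bayes' rule for P(lied | evidence)."""
    p_evidence = (p_evidence_given_lied * prior_lied
                  + p_evidence_given_not_lied * (1 - prior_lied))
    return p_evidence_given_lied * prior_lied / p_evidence

# A category boundary that rules some states-of-affairs out: the evidence is
# more likely if Kevin lied, so probability mass actually moves.
print(posterior_lied(0.3, 0.8, 0.2))  # ~0.63: an informative update

# A boundary so expansive that the evidence is equally likely either way:
# the posterior equals the prior, and the word "lied" has done no work.
print(posterior_lied(0.3, 0.5, 0.5))  # 0.3: no update at all
```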

All of this is entirely correct. But Jessica Taylor (whose post “The AI Timelines Scam” inspired “Against Lie Inflation”) wasn’t arguing that everything is lying; she was just using a more permissive conception of lying than the one Alexander prefers, such that Alexander didn’t think that Taylor’s definition could stably and consistently identify non-lies.

Concerning Alexander’s arguments against the expanded definition, I find I have one strong objection (that appeal-to-consequences is an invalid form of reasoning for optimal-categorization questions for essentially the same reason as it is for questions of simple fact), and one more speculative objection (that our intuitive “folk theory” of lying may actually be empirically mistaken). Let me explain.

(A small clarification: for myself, I notice that I also tend to frown on the expanded sense of “lying”. But the reasons for frowning matter! People who superficially agree on a conclusion but for different reasons are not really on the same page!)

Appeals to Consequences Are Invalid

There is no method of reasoning more common, and yet none more blamable, than, in philosophical disputes, to endeavor the refutation of any hypothesis, by a pretense of its dangerous consequences[.]

David Hume

Alexander contrasts the imagined consequences of the expanded definition of “lying” becoming more widely accepted, to a world that uses the restricted definition:

[E]veryone is much angrier. In the restricted-definition world, a few people write posts suggesting that there may be biases affecting the situation. In the expanded-definition world, those same people write posts accusing the other side of being liars perpetrating a fraud. I am willing to listen to people suggesting I might be biased, but if someone calls me a liar I’m going to be pretty angry and go into defensive mode. I’ll be less likely to hear them out and adjust my beliefs, and more likely to try to attack them.

But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).

(Again, the appeal is still invalid even if the conclusion—in this case, that unconscious rationalization shouldn’t count as “lying”—might be true for other reasons.)

Some aspiring epistemic rationalists like to call this the “Litany of Tarski”. If Elijah is lying (with respect to whatever the optimal category boundary for “lying” turns out to be according to our standard Bayesian philosophy of language), then I desire to believe that Elijah is lying (with respect to the optimal category boundary according to … &c.). If Elijah is not lying (with respect to … &c.), then I desire to believe that Elijah is not lying.

If the one comes to me and says, “Elijah is not lying; to support this claim, I offer this-and-such evidence of his sincerity,” then this is right and proper, and I am eager to examine the evidence presented.

If the one comes to me and says, “You should choose to define lying such that Elijah is not lying, because if you said that he was lying, then he might feel angry and defensive,” this is insane. The map is not the territory! If Elijah’s behavior is, in fact, deceptive—if he says things that cause people who trust him to be worse at anticipating their experiences when he reasonably could have avoided this—I can’t make his behavior not-deceptive by changing the meanings of words.

Now, I agree that it might very well empirically be the case that if I say that Elijah is lying (where Elijah can hear me), he might get angry and defensive, which could have a variety of negative social consequences. But that’s not an argument for changing the definition of lying; that’s an argument that I have an incentive to lie about whether I think Elijah is lying! (Though Glomarizing about whether I think he’s lying might be an even better play.)

Alexander is concerned that people might strategically equivocate between different definitions of “lying” as an unjust social attack against the innocent, using the classic motte-and-bailey maneuver: first, argue that someone is “lying (expanded definition)” (the motte), then switch to treating them as if they were guilty of “lying (restricted definition)” (the bailey) and hope no one notices.

So, I agree that this is a very real problem. But it’s worth noting that the problem of equivocation between different category boundaries associated with the same word applies symmetrically: if it’s possible to use an expanded definition of a socially-disapproved category as the motte and a restricted definition as the bailey in an unjust attack against the innocent, then it’s also possible to use an expanded definition as the bailey and a restricted definition as the motte in an unjust defense of the guilty. Alexander writes:

The whole reason that rebranding lesser sins as “lying” is tempting is because everyone knows “lying” refers to something very bad.

Right—and conversely, because everyone knows that “lying” refers to something very bad, it’s tempting to rebrand lies as lesser sins. Ruby Bloom explains what this looks like in the wild:

I worked in a workplace where lying was commonplace, conscious, and system 2. Clients asking if we could do something were told “yes, we’ve already got that feature (we hadn’t) and we already have several clients successfully using that (we hadn’t).” Others were invited to be part of an “existing beta program” alongside others just like them (in fact, they would have been the very first). When I objected, I was told “no one wants to be the first, so you have to say that.”

[...] I think they lie to themselves that they’re not lying (so that if you search their thoughts, they never think “I’m lying”)[.]

If your interest in the philosophy of language is primarily to avoid being blamed for things—perhaps because you perceive that you live in a Hobbesian dystopia where the primary function of words is to elicit actions, where the denotative structure of language was eroded by political processes long ago, and all that’s left is a standardized list of approved attacks—in that case, it makes perfect sense to worry about “lie inflation” but not about “lie deflation.” If describing something as “lying” is primarily a weapon, then applying extra scrutiny to uses of that weapon is a wise arms-restriction treaty.

But if your interest in the philosophy of language is to improve and refine the uniquely human power of vibratory telepathy—to construct shared maps that reflect the territory—if you’re interested in revealing what kinds of deception are actually happening, and why—

(in short, if you are an aspiring epistemic rationalist)

—then the asymmetrical fear of false-positive identifications of “lying” but not false-negatives—along with the focus on “bad actors”, “stigmatization”, “attacks”, &c.—just looks weird. What does that have to do with maximizing the probability you assign to the right answer??

The Optimal Categorization Depends on the Actual Psychology of Deception

Deception
My life seems like it’s nothing but
Deception
A big charade

I never meant to lie to you
I swear it
I never meant to play those games

“Deception” by Jem and the Holograms

Even if the fear of rhetorical warfare isn’t a legitimate reason to avoid calling things lies (at least privately), we’re still left with the main objection that “lying” is a different thing from “rationalizing” or “being biased”. Everyone is biased in some way or another, but to lie is “[t]o give false information intentionally with intent to deceive.” Sometimes it might make sense to use the word “lie” in a noncentral sense, as when we speak of “lying to oneself” or say “Oops, I lied” in reaction to being corrected. But it’s important that these senses be explicitly acknowledged as noncentral and not conflated with the central case of knowingly speaking falsehood with intent to deceive—as Alexander says, conflating the two can only be to the benefit of actual liars.

Why would anyone disagree with this obvious ordinary view, if they weren’t trying to get away with the sneaky motte-and-bailey social attack that Alexander is so worried about?

Perhaps because the ordinary view relies on an implied theory of human psychology that we have reason to believe is false? What if conscious intent to deceive is typically absent in the most common cases of people saying things that (they would be capable of realizing upon being pressed) they know not to be true? Alexander writes—

So how will people decide where to draw the line [if egregious motivated reasoning can count as “lying”]? My guess is: in a place drawn by bias and motivated reasoning, same way they decide everything else. The outgroup will be lying liars, and the ingroup will be decent people with ordinary human failings.

But if the word “lying” is to actually mean something rather than just being a weapon, then the ingroup and the outgroup can’t both be right. If symmetry considerations make us doubt that one group is really that much more honest than the other, that would seem to imply that either both groups are composed of decent people with ordinary human failings, or that both groups are composed of lying liars. The first description certainly sounds nicer, but as aspiring epistemic rationalists, we’re not allowed to care about which descriptions sound nice; we’re only allowed to care about which descriptions match reality.

And if all of the concepts available to us in our native language fail to match reality in different ways, then we have a tough problem that may require us to innovate.

The philosopher Roderick T. Long writes:

Suppose I were to invent a new word, “zaxlebax,” and define it as “a metallic sphere, like the Washington Monument.” That’s the definition—“a metallic sphere, like the Washington Monument.” In short, I build my ill-chosen example into the definition. Now some linguistic subgroup might start using the term “zaxlebax” as though it just meant “metallic sphere,” or as though it just meant “something of the same kind as the Washington Monument.” And that’s fine. But my definition incorporates both, and thus conceals the false assumption that the Washington Monument is a metallic sphere; any attempt to use the term “zaxlebax,” meaning what I mean by it, involves the user in this false assumption.

If self-deception is as ubiquitous in human life as authors such as Robin Hanson argue (and if you’re reading this blog, this should not be a new idea to you!), then the ordinary concept of “lying” may actually be analogous to Long’s “zaxlebax”: the standard intensional definition (“speaking falsehood with conscious intent to deceive”/“a metallic sphere”) fails to match the most common extensional examples that we want to use the word for (“people motivatedly saying convenient things without bothering to check whether they’re true”/“the Washington Monument”).

Arguing for this empirical thesis about human psychology is beyond the scope of this post. But if we live in a sufficiently Hansonian world where the ordinary meaning of “lying” fails to carve reality at the joints, then authors are faced with a tough choice: either be involved in the false assumptions of the standard believed-to-be-central intensional definition, or be deprived of the use of common expressive vocabulary. As Ben Hoffman points out in the comments to “Against Lie Inflation”, an earlier Scott Alexander didn’t seem shy about calling people liars in his classic 2014 post “In Favor of Niceness, Community, and Civilization”:

Politicians lie, but not too much. Take the top story on Politifact Fact Check today. Some Republican claimed his supposedly-maverick Democratic opponent actually voted with Obama’s economic policies 97 percent of the time. Fact Check explains that the statistic used was actually for all votes, not just economic votes, and that members of Congress typically have to have >90% agreement with their president because of the way partisan politics work. So it’s a lie, and is properly listed as one. [bolding mine —ZMD] But it’s a lie based on slightly misinterpreting a real statistic. He didn’t just totally make up a number. He didn’t even just make up something else, like “My opponent personally helped design most of Obama’s legislation”.

Was the politician consciously lying? Or did he (or his staffer) arrive at the misinterpretation via unconscious motivated reasoning and then just not bother to scrupulously check whether the interpretation was true? And how could Alexander know?

Given my current beliefs about the psychology of deception, I find myself inclined to reach for words like “motivated”, “misleading”, “distorted”, &c., and am more likely to frown at uses of “lie”, “fraud”, “scam”, &c. where intent is hard to establish. But even while frowning internally, I want to avoid tone-policing people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they’re trying to point to. Insisting on replacing the six instances of the phrase “malicious lies” in “Niceness, Community, and Civilization” with “maliciously-motivated false belief” would just be worse writing.

And I definitely don’t want to excuse motivated reasoning as a mere ordinary human failing for which someone can’t be blamed! One of the key features that distinguishes motivated reasoning from simple mistakes is the way that the former responds to incentives (such as being blamed). If the elephant in your brain thinks it can get away with lying just by keeping conscious-you in the dark, it should think again!