Not Technically Lying

I’m sorry I took so long to post this. My computer broke a little while ago. I promise this will be relevant later.

A surgeon has to perform emergency surgery on a patient. No painkillers of any kind are available. The surgeon takes an inert saline IV and hooks it up to the patient, hoping that the illusion of extra treatment will make the patient more comfortable. The patient asks, “What’s in that?” The doctor has a few options:

  1. “It’s a saline IV. It shouldn’t do anything itself, but if you believe it’s a painkiller, it’ll make this less painful.”

  2. “Morphine.”

  3. “The strongest painkiller I have.”

- The first explanation is not only true, but maximizes the patient’s understanding of the world.
- The second is obviously a lie, though, in this case, it is a lie with a clear intended positive effect: if the patient thinks he’s getting morphine, then, due to the placebo effect, there is a very real chance he will experience less subjective pain.
- The third is, in a sense, both true and a lie. It is technically true. However, it’s somewhat arbitrary; the doctor could just as easily have said “It’s the weakest painkiller I have,” or “It’s the strongest sedative I have,” or any number of other technically true but misleading statements. This statement is clearly intended to mislead the hearer into thinking the IV is a potent painkiller; it promotes false beliefs while not quite being a false statement. It’s Not Technically Lying. It seems to deserve most, if not all, of the disapproval that actually lying does; its technical truth does not save it. Because language does not specify single, clear meanings, we can often use language whose obvious meaning is false and whose non-obvious meaning is true, intentionally promoting false beliefs without making false statements.

Another, perhaps more practical example: the opening two sentences of this post. I have been meaning to write this for a couple weeks, and have failed mostly due to akrasia. My computer broke a few months ago. Both statements are technically true,1 but the implied “because” is not just false, but completely opposite the truth: it’s complex, but if my computer had not broken, I would probably never have written this post. I’ve created the impression of a quasi-legitimate excuse without actually saying anything false, because our conventional use of language filled in the gaps that would have been lies.

The distinction between telling someone a falsehood with the intention of promoting false beliefs and telling them a truth with the intention of promoting false beliefs seems razor-thin. In general, you’re probably not justified in deceiving someone, but if you are justified, I hardly see how one form of deception is totally OK and the other is totally wrong. If, and I stress if, your purpose is justified, it seems you should choose whichever will fulfill it more effectively. I’d imagine the balance generally favors Not Technically Lying (NTL), because there are often negative consequences associated with lies, but I doubt that the balance strictly favors NTL; the above doctor hypothetical is an example where the lie seems better than the truth (absent malpractice concerns).

For what common sentiment is worth, people often see little distinction between lies and NTLs. If I used my computer excuse with a boss or professor, and she later found out my computer actually broke before the paper was even assigned, my saying, “Well, I didn’t claim there was a causal connection; you made that leap yourself! I was telling the truth (technically)!” is unlikely to fix the damage to her opinion of me. From the perspective of the listener, the two are about equally wrong. Indeed, at least in my experience, some listeners view NTL as worse because you don’t even think you’re lying to them.

Lying does admittedly have its own special problems, though I think the big one, deception of others, is clearly shared. There is the risk of lies begetting further lies, as the truth is entangled. This may be true, but it is unclear how Not Technically Lying resolves it; if you have been entirely (technically) honest, then the moment your claim is seriously questioned, you must either admit you were misleading someone or continue misleading them in a very clever manner. If you were actually justified in misleading them, failing to do so does not appear to be an efficient outcome. If you’re able to mislead them further, then you’ve further separated their mind from reality, even if, had they really understood what you said, you wouldn’t have. And, of course, there’s the serious risk that you will come to believe your own lies.

Not Technically Lying poses a few problems that lying does not. For one, if I fill in the bottom line and then fill in my premises with NTLs, omitting or rephrasing difficult facts, I can potentially create an excellent argument, an investigation of which will show all my premises are true. If I lied, the lie could be spotted by fact-checking and my argument largely dismissed as a result. Depending on the context (for example, if I know there are fact-checkers), either one may be more efficient at confounding the truth.

While believing your own lies is a real risk, if you are generally honest, you will at least be aware when you are lying, and it will likely be highly infrequent. NTL, by contrast, may be too cheap. If I lie about something, I realize that I’m lying and I feel bad that I have to. I may change my behaviour in the future to avoid that. I may realize that it reflects poorly on me as a person. But if I don’t technically lie, well, hey! I’m still an honest, upright person, and I can thus justify viciously misleading people because at least I’m not technically dishonest. I can easily overvalue the technical truth if I don’t worry about promoting true beliefs. Of course, this will vary by individual; if you think lying is generally pretty much OK, you’re probably doomed either way. For this concern to apply, you’d have to have a pretty serious attachment to the truth. But if you have that attachment, NTL seems that much more dangerous.

I’m not trying to spell out a moral argument for why we should all lie; if anything, I’m spelling out an argument for why we shouldn’t all Not Technically Lie. Where one is immoral, in most if not all cases, so is the other; and where one is justified, the other is likely justified as well, though perhaps no more justified. If lying is never justified because of its effect on the listener, then neither is NTL. If lying is never justified because of its effect on the speaker, well, NTL may or may not be justified; its effects on the speaker don’t seem so good, either.

To tie this into AI (definitely not my field, so I’ll be quite brief), it seems a true superintelligence would be unbelievably good at promoting false beliefs with true statements if it really understood the beings it was speaking to. Imagine how well a person could mislead you if they knew beforehand exactly how you would interpret any statement they made. If our concern is the effect on the listener, rather than the effect on the speaker, this is a problem to be concerned with. A Technically Honest AI could probably get away with more deception than we can imagine.

1. Admittedly this depends on what you count as a “little while,” but this is sufficiently subjective that I find it reasonable to call both statements true.

As a footnote, I realize that this topic has been covered a lot, but I do not recall seeing this angle (or, actually, this distinction) discussed; it’s always been truth vs. falsity, so hopefully this is an interesting take on a thoroughly worn subject.