Is fake news bullshit or lying?

New strategies for combating misinformation

A layperson-friendly view. Cross-posted from my personal blog, First Principles.

Fake news is on the rise. We know this from Facebook shares, WhatsApp forwards, Twitter trolls, and Potemkin news sites. We see it in elections across the world, novel coronavirus guidance, and nation-state posturing.

We’ve known about the issue for a while, and technology companies, in their role as the primary distributors, have taken action. This action has not stemmed the tide, and meanwhile the techniques of misinformation evolve and proliferate: bot armies and deepfakes being just two recent innovations.

Why is it so difficult to define what fake news is? Why does calling out lies have little impact on those already deceived? And crucially, what can we do to restore trust and reason to public and social media?

What is fake news?

Fake news is deliberate, targeted misinformation. It’s not necessarily wholly false: perpetrators are as willing to utilize truths that fit their narrative as they are to concoct falsehoods to construct it. They’re necessarily indifferent to the truth, and attached solely to the outcome: the beliefs they wish to plant in the minds of their targets. From Harry Frankfurt’s On Bullshit:

[The bullshitter’s] eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

Fake news, therefore, isn’t lying, but bullshit.

Why is fake news so hard to fight?

Fake news, being an extension of bullshit, inherits many of its traits:

It’s hard to refute

The claims within bullshit are many, nebulous, and often not even wrong. Fact-checking alone isn’t adequate to the task of refuting them, and neither is reliance upon a trustworthy set of sources.

It’s normalized

As Frankfurt points out:

One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. […] The realms of advertising and of public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.

It’s hard to regulate

If bullshit is hard to characterize, it’s harder still to legally define. By being either not even wrong or outlandishly so, fake news can take advantage of freedom-of-speech protections for parody and satire. In any specific case, the perpetrators may be elusive, outside the legal jurisdiction of their victims, or, if every content repost is counted, too many in number to sue.

What will it take?

Effectively countering misinformation requires a sea change in how journalistic media engages with fake news’ misleading narratives, and in the metrics by which content distributors value and incentivize activity on their platforms.

Preempt the narrative

Fact-checking is journalists’ prime weapon against fake news, and fact-checking tools have rightfully proliferated and are even surfaced alongside suspect material by content distributors. But fact-checking alone is ineffective at changing minds, and is at best a reactive and arduous activity that can verify only a tiny fraction of publicized claims.

Instead, content creators and journalistic media must track fake news with the aim of anticipating the intended post-truth narratives, and promote countervailing, fact-based narratives. This narrative-busting connects with disparate audiences in the ways most meaningful to each, without forgoing journalistic neutrality. From UNESCO’s journalism handbook on fake news:

The core components of professional journalistic practice […] can be fulfilled in a range of journalistic styles and stories, each embodying different narratives that in turn are based on different values and varying perspectives of fairness, contextuality, relevant facts, etc.

Journalists must intimately understand their audience to honestly and confidently convey these narratives. Techniques of causal correction and moral reframing have been shown to be effective in conveying factual information. For example:

Saying “the senator denies he is resigning because of a bribery investigation” is not that effective, even with good evidence that that’s the truth.
More effective would look something like this: “the senator denies he is resigning because of a bribery investigation. Instead, he said he is becoming the president of a university.”

Journalistic neutrality has come to mean catering to a single (mostly moderately liberal) audience, at the expense of the touchpoints of understanding that appealed to large swathes of the population. To counter misleading and polarizing narratives, alternatives grounded in reality must be translated into the value and belief systems of diverse peoples.

Incentivize deliberation

Sharing is easy and uniform across all content, but not all engagement is created equal. Technology companies must recognize that slowing down some kinds of engagement leads to higher-quality content and better shareholder value.

Like journalistic media, content distributors have relied on fact-checking, with Facebook, YouTube, and Twitter tagging suspected misinformation. This has been applied sparingly and with mixed results, and has also exacerbated the problem by implying that untagged content is verified to be true. And even for this flagged subset of content, sharing and cross-posting remain frictionless.

Blocking the sharing of any content outright is undesirable, and raises issues of censorship and free expression. However, technology companies can build features that incentivize users to reflect on problematic content before sharing it, improving the quality of ensuing discussions.

When a user shares flagged content, platform features can enforce adding an accompanying comment of a minimum length and complexity to encourage deliberation, or answering a quick IMVAIN survey to crowdsource its reliability. These need not be mandatory, but disincentives can be applied by indicating to subsequent viewers the instances where the user declined to comment on or verify the post.
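To make the mechanism concrete, here is a minimal sketch of such a share-time check. This is hypothetical, not any platform’s real API: the `Share` record, the word-count threshold, and the label text are all illustrative assumptions.

```python
from dataclasses import dataclass

# Assumed threshold for what counts as a deliberate comment; a real
# system would also assess complexity, not just length.
MIN_COMMENT_WORDS = 15

@dataclass
class Share:
    content_flagged: bool   # was the content flagged as suspect?
    comment: str            # user's accompanying comment, may be empty
    survey_completed: bool  # did the user answer a reliability survey?

def deliberation_label(share: Share) -> str:
    """Return the disincentive label shown to subsequent viewers, if any."""
    if not share.content_flagged:
        return ""  # unflagged content: no friction, no label
    commented = len(share.comment.split()) >= MIN_COMMENT_WORDS
    if commented or share.survey_completed:
        return ""  # user engaged with the content before sharing
    # The user declined both options: surface that to viewers.
    return "Shared without comment or reliability check"
```

Note that nothing here blocks the share; the only lever is what later viewers see, which is the non-mandatory disincentive the paragraph above describes.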

Such measures can be differentially applied, and distributors have already demonstrated this ability in automatically flagging and prioritizing content for fact-checking. By treating content that is new, unverified, or suspected of being misleading uniformly across the spectra of politics and values, platforms can process larger tracts of content, improve content quality, and sidestep bias.


Fake news is bullshit: hard to pin down, refute, and regulate. Countering misinformation requires journalists to promote factual narratives by engaging overlooked audiences with causal and moral reframing, and content distributors to incentivize deliberation, crowdsource reliability, discourage uncritical reposting of suspect content, and develop engagement metrics that distinguish quality activity.