Optimized Propaganda with Bayesian Networks: Comment on “Articulating Lay Theories Through Graphical Models”

Derek Powell, Kara Weisman, and Ellen M. Markman’s “Articulating Lay Theories Through Graphical Models: A Study of Beliefs Surrounding Vaccination Decisions” (a conference paper from CogSci 2018) represents an exciting advance in marketing research, showing how to use causal graphical models to study why ordinary people have the beliefs they do, and how to intervene to make them be less wrong.

The specific case our authors examine is that of childhood vaccination decisions: some parents don’t give their babies the recommended vaccines, because they’re afraid that vaccines cause autism. (Not true.) This is pretty bad—not only are those unvaccinated kids more likely to get sick themselves, but declining vaccination rates undermine the population’s herd immunity, leading to new outbreaks of highly contagious diseases like the measles in regions where they were once eradicated.

What’s wrong with these parents, huh?! But that doesn’t have to just be a rhetorical question—Powell et al. show how we can use statistics to make the rhetorical hypophorical and model specifically what’s wrong with these people! Realistically, people aren’t going to just have a raw, “atomic” dislike of vaccination for no reason: parents who refuse to vaccinate their children do so because they’re (irrationally) afraid of giving their kids autism, and not afraid enough of letting their kids get infectious diseases. Nor are beliefs about vaccine effectiveness or side-effects uncaused, but instead depend on other beliefs.

To unravel the structure of the web of beliefs, our authors got Amazon Mechanical Turk participants to take surveys about vaccination-related beliefs, rating statements like “Natural things are always better than synthetic alternatives” or “Parents should trust a doctor’s advice even if it goes against their intuitions” on a 7-point Likert-like scale from “Strongly Agree” to “Strongly Disagree”.

Throwing some off-the-shelf Bayes-net structure-learning software at a training set from the survey data, plus some ancillary assumptions (more-general “theory” beliefs like “skepticism of medical authorities” can cause more-specific “claim” beliefs like “vaccines have harmful additives”, but not vice versa), produces a range of probabilistic models that can be depicted as graphs in which nodes representing the different beliefs are connected by arrows showing which beliefs “cause” others. An arrow from a naturalism node (in this context, denoting a worldview that prefers natural over synthetic things) to a parental-expertise node means that people think parents know best because they think that nature is good, not the other way around.
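
For concreteness, here’s a minimal sketch of what that kind of pipeline can look like, assuming Python and pgmpy’s hill-climbing structure search; the file name, the belief columns, and the theory/claim split below are illustrative placeholders rather than Powell et al.’s actual variables or tooling.

```python
# Minimal sketch of tiered Bayes-net structure learning, assuming pgmpy.
# The column names and the theory/claim split are illustrative placeholders,
# not the variables from Powell et al.'s survey.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

survey = pd.read_csv("survey_responses.csv")  # one Likert-scored column per belief (hypothetical file)

theory_beliefs = ["naturalism", "medical_skepticism", "parental_expertise"]
claim_beliefs = ["vaccines_cause_autism", "harmful_additives", "diseases_are_rare"]

# Ancillary assumption: "claim" beliefs may not cause "theory" beliefs,
# so forbid every claim -> theory edge before searching.
black_list = [(claim, theory) for claim in claim_beliefs for theory in theory_beliefs]

search = HillClimbSearch(survey)
model = search.estimate(scoring_method=BicScore(survey), black_list=black_list)
print(sorted(model.edges()))  # the arrows of the learned belief network
```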

Learning these kinds of models is feasible because not all possible causal relationships are consistent with the data: if A and B are statistically independent of each other, but each is dependent with C (and A and B are conditionally dependent given the value of C), it’s kind of hard to make sense of this except to posit that A and B are causes with the common effect C.
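
Here’s a toy simulation (mine, not the paper’s) of that collider pattern: two independent coin flips A and B with common effect C = A XOR B are uncorrelated on their own, but become perfectly anti-correlated once you condition on C.

```python
# Toy collider demo: A and B are independent causes of the common effect C = A XOR B.
# Marginally corr(A, B) is ~0, but conditioning on C induces a strong dependence.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
c = a ^ b  # common effect

print("corr(A, B)         =", round(np.corrcoef(a, b)[0, 1], 3))                    # ~0.0
print("corr(A, B | C = 1) =", round(np.corrcoef(a[c == 1], b[c == 1])[0, 1], 3))    # -1.0
```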

Simpler models with fewer arrows might sacrifice a little bit of predictive accuracy for the benefit of being more intelligible to humans. Powell et al. ended up choosing a model that can predict responses from the test set at r = .825, explaining 68.1% of the variance (that’s just r²). Not bad?!—check out the full 14-node graph in Figure 2 on page 4 of the PDF.

Causal graphs are useful as a guide for planning interventions: the graph encodes predictions about what would happen if you changed some of the variables. Our authors point out that since previous work showed that people’s beliefs about vaccine dangers were difficult to influence, it makes sense to try intervening on the other parents of the intent-to-vaccinate node in the model: if the hoi polloi won’t listen to you when you tell them the costs are minimal (vaccines are safe), instead tell them about the benefits (diseases are really bad and vaccines prevent disease).

To make sure I really understand this, I want to adapt it into a simpler example with made-up numbers where I can do the arithmetic myself. Let me consider a graph with just three nodes—

vaccines are safe → vaccinate against measles ← measles are dangerous

Suppose this represents a structural equation model where an anti-vaxxer-leaning parent-to-be’s propensity-to-vaccinate-against-measles v is expressed in terms of belief-in-vaccine-safety s and belief-in-measles-danger d as—

v := 0.6·s + 0.4·d

(made-up coefficients, chosen so that v depends on s more heavily than on d).

And suppose that we’re a public health authority trying to decide whether to spend our budget (or what’s left of it after recent funding cuts) on a public education initiative that will increase s by 0.1, or one that will increase d by 0.3.

We should choose the program that intervenes on d, because 0.4 · 0.3 = 0.12 is bigger than 0.6 · 0.1 = 0.06. That’s actionable advice that we couldn’t have derived without a quantitative model of how the lay audience thinks. Exciting!
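
A few lines of Python make that arithmetic explicit, using the made-up coefficients and intervention sizes from above:

```python
# Compare two candidate interventions on the toy structural equation
# v := 0.6*s + 0.4*d, using the made-up numbers from the example above.
W = {"s": 0.6, "d": 0.4}  # how strongly propensity-to-vaccinate depends on each parent belief

def effect_on_v(belief: str, achievable_increase: float) -> float:
    """Predicted change in v from boosting one parent belief by the given amount."""
    return W[belief] * achievable_increase

campaigns = {"boost s (vaccine safety)": ("s", 0.1),
             "boost d (measles danger)": ("d", 0.3)}
for name, (belief, delta) in campaigns.items():
    print(f"{name}: change in v = {effect_on_v(belief, delta):.2f}")
# boost s: 0.06, boost d: 0.12 -> spend the budget on the measles-danger campaign
```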

At this point, some readers may be wondering why I’ve described this work as “marketing research” about constructing “optimized propaganda.” A couple of those words usually have negative connotations, but educating people about the importance of vaccines is a positive thing. What gives?

The thing is, “Learn the causal graph of why they think that and compute how to intervene on it to make them think something else” is a symmetric weapon—a fully general persuasive technique that doesn’t depend on whether the thing you’re trying to convince them of is true.

In my simplified example, the choice to intervene on d was based on numerical assumptions that amount to the claim that it’s sufficiently easier to change d than it is to change s, such that intervening on d is more effective at changing v than intervening on s (even though v depends on s more than it does on d). But this methodology is completely indifferent to what s, d, and v mean. It would have worked just as well, and for the same reasons, if the graph had been—

Coca-Cola isn't unhealthy → drink Coca-Cola ← Coca-Cola tastes great

Suppose that we’re advertising executives for the Coca-Cola Company trying to decide how to spend our budget (or what’s left of it after recent funding cuts). If consumers won’t listen to us when we tell them the costs of drinking Coke are minimal (lying that it isn’t unhealthy), we should instead tell them about the benefits (Coke tastes good).

Or with different assumptions about the parameters—maybe the health-related belief is actually the easier one to shift, so that we could increase the “isn’t unhealthy” belief by 0.3 but the “tastes great” belief by only 0.1—then intervening to increase belief in “Coca-Cola isn’t unhealthy” would be the right move (because 0.6 · 0.3 = 0.18 is bigger than 0.4 · 0.1 = 0.04). A marketing algorithm that just computes which belief changes will flip the decision node doesn’t have any way to notice or care whether those belief changes are in the direction of more or less accuracy.
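
To drive home the label-indifference, here is the same toy decision rule relabeled for the soda campaign; swapping in the alternative made-up parameters flips the recommendation, and nothing in the computation knows or cares which beliefs are true:

```python
# Same decision rule, relabeled for the soda campaign; the computation never
# depends on what the beliefs mean or whether they're accurate.
W = {"isn't unhealthy": 0.6, "tastes great": 0.4}  # made-up weights, as above

def best_campaign(achievable_increase: dict[str, float]) -> str:
    """Pick the belief whose achievable shift moves the decision node the most."""
    return max(achievable_increase, key=lambda belief: W[belief] * achievable_increase[belief])

print(best_campaign({"isn't unhealthy": 0.1, "tastes great": 0.3}))  # -> tastes great
print(best_campaign({"isn't unhealthy": 0.3, "tastes great": 0.1}))  # -> isn't unhealthy
```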

To be clear—and I really shouldn’t have to say this—this is not a criticism of Powell–Weisman–Markman’s research! The “Learn the causal graph of why they think that” methodology is genuinely really cool! It doesn’t have to be deployed as a marketing algorithm: the process of figuring out which belief change would flip some downstream node is the same thing as what we call locating a crux.[1] The difference is just a matter of forwards or backwards direction: whether you first figure out whether the measles vaccine or Coca-Cola is safe and then use whatever answer you come up with to guide your decision, or whether you write the bottom line first.

Of course, most people on most issues don’t have the time or expertise to do their own research. For the most part, we can only hope that the sources we trust as authorities are doing their best to use their limited bandwidth to keep us genuinely informed, rather than merely computing what signals to emit in order to control our decisions.

If that’s not true, we might be in trouble—perhaps increasingly so, if technological developments grant new advantages to the propagation of disinformation over the discernment of truth. In a possible future world where most words are produced by AIs running a “Learn the causal graph of why they think that and intervene on it to make them think something else” algorithm hooked up to a next-generation GPT, even reading plain text from an untrusted source could be dangerous.


  1. Thanks to Anna Salamon for this observation.