The Costly Coordination Mechanism of Common Knowledge

Recently someone pointed out to me that there was no good canonical post that explained the use of common knowledge in society. Since I wanted to be able to link to such a post, I decided to try to write it.

The epistemic status of this post is that I hoped to provide an explanation for a standard, mainstream idea, in a concrete way that could be broadly understood rather than in a mathematical/logical fashion, and so the definitions should all be correct, though the examples in the latter half are more speculative and likely contain some inaccuracies.

Let’s start with a puzzle. What do these three things have in common?

  • Dictatorships all through history have attempted to suppress freedom of the press and freedom of speech. Why is this? Are they just very sensitive? On the other side, the leaders of the Enlightenment fought for freedom of speech, and would not budge an inch on this principle.

  • When two people are on a date and want to sleep with each other, the conversation will often move towards but never explicitly discuss having sex. The two may discuss going back to one of their places, with a different explicit reason discussed (e.g. “to have a drink”), even if both want to have sex.

  • Throughout history, communities have had religious rituals that look very similar. Everyone in the village has to join in. There are repetitive songs, repetitive lectures on the same holy books, chanting together. Why, of all the possible community events (e.g. dinners, parties, etc.) is this the most common type?

What these three things have in common is common knowledge—or at least, the attempt to create it.

Before I spell that out, we’ll take a brief look into game theory so that we have the language to describe clearly what’s going on. Then we’ll be able to see concretely, in a bunch of examples, how common knowledge is necessary to understand and build institutions.

Prisoner’s Dilemmas vs Coordination Problems

To understand why common knowledge is useful, I want to contrast two types of situations in game theory: Prisoner’s Dilemmas and Coordination Problems. They look similar at first glance, but their payoff matrices have important differences.

The Prisoner’s Dilemma (PD)

You’ve probably heard of it—two players have the opportunity to cooperate or defect against each other, based on a story about two prisoners being offered a deal if they testify against the other.

If they both stay quiet, they’ll both be put away for a short time; if one of them snitches on the other, the snitch gets off free and the snitched-on gets a long sentence. However, if they both snitch they get pretty bad sentences (though neither is as long as when only one snitches on the other).

In game theory, people often like to draw little boxes that show two different people’s choices, and how much each likes the outcome. Such a diagram is called a decision matrix, and the numbers are called the players’ payoffs.

To describe the Prisoner’s Dilemma, below is a decision matrix where Anne and Bob each have the same two choices, labelled C and D. These are colloquially called ‘cooperate’ and ‘defect’. Each box contains two numbers, for Anne and Bob’s payoffs respectively.

If the prisoner ‘defects’ on his partner, this means he snitches, and if he ‘cooperates’ with his partner, he doesn’t snitch. They’d both prefer that both of them cooperate (C, C) to both of them defecting (D, D), but each of them has an incentive to stab the other in the back to reap the most reward (playing D while the other plays C).

Do you see in the matrix how they both would prefer no snitching to both snitching, but they also have an incentive to stab each other in the back?
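The decision matrix itself was an image in the original, so as a sketch, here is one standard set of Prisoner's Dilemma payoffs in code (the 3/0/4/1 numbers are an assumption, chosen to match the "4 for defecting against a cooperator" quoted later in the post), with a check that defection is always the better response:

```python
# Hypothetical payoffs for the Prisoner's Dilemma described above.
# payoffs[(anne_move, bob_move)] = (anne_payoff, bob_payoff)
PD = {
    ("C", "C"): (3, 3),  # both stay quiet: short sentences
    ("C", "D"): (0, 4),  # Anne is snitched on: long sentence; Bob walks free
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),  # both snitch: pretty bad sentences
}

def best_response(payoffs, opponent_move, player):
    """Return the move that maximises this player's payoff, holding the opponent fixed."""
    moves = ["C", "D"]
    if player == 0:  # Anne chooses the first coordinate
        return max(moves, key=lambda m: payoffs[(m, opponent_move)][0])
    else:            # Bob chooses the second coordinate
        return max(moves, key=lambda m: payoffs[(opponent_move, m)][1])

# Whatever the opponent does, defecting pays more:
assert best_response(PD, "C", 0) == "D"
assert best_response(PD, "D", 0) == "D"
```

Note that (C, C) pays each player more than (D, D), yet D is each player's best response to anything: that tension is the whole dilemma.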

Real World Examples

Nuclear disarmament is a prisoner’s dilemma. Both the Soviet Union and the US wanted to have nuclear bombs while their opponent didn’t, but they’d probably both have preferred a world where nobody had bombs to a world where they were both pointing massive weapons at each other’s heads. Unfortunately, in our world, we failed to solve the problem, and ended up pointing massive weapons at each other’s heads for decades.

Military budget spending more broadly can be a prisoner’s dilemma. Suppose two neighbouring countries are determining how much to spend on the military. Well, they don’t want to go to war with each other, and so they’d each like to spend a small amount of money on their military, and spend the rest of the money on running the country—infrastructure, healthcare, etc. However, if one country spends a small amount and the other country spends a lot, then the second country can just walk in and take over the first. So, they both spend lots of money on the military with no intention of using it, just so the other one can’t take over.

Another prisoner’s dilemma is tennis players figuring out whether to take performance-enhancing drugs. Naturally, each would like to dope while their opponent doesn’t, but they’d rather both not dope than both dope.

Free-Rider Problems

Did you notice how there are more than two tennis players in the doping situation? When deciding whether to take drugs, not only do you have to worry about whether your opponent in the match today will dope, but also whether your opponent tomorrow will, and the day after, and so on. We’re all wondering whether all of us will dope. In society there are loads of these scaled-up versions of the prisoner’s dilemma.

For example, according to many political theories, everyone is better off if the government takes some taxes and uses them to provide public goods (e.g. transportation, military, hospitals). As a population, it’s in everyone’s interest if everyone cooperates and takes a small personal sacrifice of wealth.

However, if most people are doing it, you can defect, and this is great for you—you get the advantage of a government providing public goods, and you also keep your own money. But if everyone defects, then nobody gets the important public goods, and this is worse for each person than if they’d all cooperated.

Whether you’re two robbers, one of many tennis players, or a whole country facing another country, you will run into a prisoner’s dilemma. In the scaled-up version, a person who defects while everyone else cooperates is known as a free-rider, and the scaled-up prisoner’s dilemma is called the free-rider problem.
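As a sketch of this free-rider structure, here is a toy public-goods model (the tax, multiplier, and population numbers below are made up purely for illustration, not taken from the post):

```python
# Toy free-rider model (illustrative numbers): each of n citizens chooses
# whether to contribute 10 units of tax; the pot is multiplied by 1.5
# (the value created by public goods) and split equally among everyone.
def payoff(contributes, num_others_contributing, n=100, tax=10, multiplier=1.5):
    pot = tax * (num_others_contributing + (1 if contributes else 0))
    share = pot * multiplier / n
    return share - (tax if contributes else 0)

everyone_cooperates = payoff(True, 99)   # share 15, minus 10 tax -> 5.0
lone_defector = payoff(False, 99)        # share 14.85, pays nothing
everyone_defects = payoff(False, 0)      # no pot at all -> 0.0

# Defecting beats cooperating whatever the others do...
assert lone_defector > everyone_cooperates
# ...but universal defection is worse for each person than universal cooperation.
assert everyone_defects < everyone_cooperates
```

The per-person gain from defecting (keep your tax, lose only your tiny share of it) is what makes every individual tempted, while the all-defect outcome remains worse for everyone.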

Coordination Problems

With that under our belt, let’s look at a new decision matrix. Can you identify what’s importantly different about this matrix? Make a prediction about how you think this will change the players’ strategies.

Don’t mix this up with the Prisoner’s Dilemma—it’s quite different. In the PD, if you cooperate and I defect, I get 4. What’s important about the new decision matrix is that nobody has an incentive to backstab! If you cooperate and I defect, I get zero, instead of four.

We all want the same thing. Both players’ preference ordering is: both cooperate, then both defect, then a mismatch where only one of us cooperates.

So, you might be confused: Why is this a problem at all? Why doesn’t everyone just pick C?

Let me give an example from Michael Chwe’s classic book on the subject, Rational Ritual: Culture, Coordination and Common Knowledge.

Say you and I are co-workers who ride the same bus home. Today the bus is completely packed and somehow we get separated. Because you are standing near the front door of the bus and I am near the back door, I catch a glimpse of you only at brief moments. Before we reach our usual stop, I notice a mutual acquaintance, who yells from the sidewalk, “Hey you two! Come join me for a drink!” Joining this acquaintance would be nice, but we care mainly about each other’s company. The bus doors open; separated by the crowd, we must decide independently whether to get off.

Say that when our acquaintance yells out, I look for you but cannot find you; I’m not sure whether you notice her or not and thus decide to stay on the bus. How exactly does the communication process fail? There are two possibilities. The first is simply that you do not notice her; maybe you are asleep. The second is that you do in fact notice her. But I stay on the bus because I don’t know whether you notice her or not. In this case we both know that our acquaintance yelled but I do not know that you know.

Successful communication sometimes is not simply a matter of whether a given message is received. It also depends on whether people are aware that other people also receive it. In other words, it is not just about people’s knowledge of the message; it is also about people knowing that other people know about it, the “metaknowledge” of the message.

Say that when our acquaintance yells, I see you raise your head and look around for me, but I’m not sure if you manage to find me. Even though I know about the yell, and I know that you know since I see you look up, I still decide to stay on the bus because I do not know that you know that I know. So just one “level” of metaknowledge is not enough.

Taking this further, one soon realizes that every level of metaknowledge is necessary: I must know about the yell, you must know, I must know that you know, you must know that I know, I must know that you know that I know, and so on; that is, the yell must be “common knowledge.”

The term “common knowledge” is used in many ways but here we stick to a precise definition. We say that an event or fact is common knowledge among a group of people if everyone knows it, everyone knows that everyone knows it, everyone knows that everyone knows that everyone knows it, and so on.

Two people can create these many levels of metaknowledge simply through eye contact: say that when our acquaintance yells I am looking at you and you are looking at me, [and we exchange a brief glance at our mutual friend and nod]. Thus I know you know about the yell, you know that I know that you know (you see me looking at you), and so on. If we do manage to make eye contact, we get off the bus; communication is successful.

Coordination problems are only ever problems when everyone is currently choosing D, and we need to coordinate everyone choosing C at the same time. To do that, we need common knowledge.

(The specific definition of common knowledge (“I know that you know that I know that...”) is often confusing, but for now the concrete examples below should help build a solid intuition for the idea.)
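The infinite hierarchy in the definition can be spelled out mechanically. The snippet below is purely illustrative: it just generates the first few levels of metaknowledge for the bus example, to make the "I know that you know that..." regress concrete:

```python
# Illustrative: spell out the first few levels of the knowledge hierarchy
# that common knowledge of a fact requires between two people.
def knowledge_levels(fact, people=("I", "you"), depth=3):
    """Return 'I know that you know that ... fact' statements up to depth."""
    statements = []
    for level in range(1, depth + 1):
        # Alternate between the two people, one "knows" per level.
        chain = [people[i % 2] for i in range(level)]
        prefix = " that ".join(f"{p} know" for p in chain)
        statements.append(f"{prefix} that {fact}")
    return statements

for s in knowledge_levels("the acquaintance yelled"):
    print(s)
# Common knowledge is the limit of this sequence: every level holds at once.
```

No finite depth of this list is enough on its own, which is exactly why the eye-contact trick is so valuable: it establishes all the levels simultaneously.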

Compare you and me on the bus to the coordination game payoff matrix: If we both get off the bus (C, C), we get to hang out with each other and spend some time with a mutual acquaintance. If only one of us does (C, D or D, C), we both miss out on the opportunity to hang out with each other—the thing we want least. If neither of us gets off the bus (D, D), we get to hang out with each other, but in a less interesting way.

A Stable State

The reason that it’s a difficult coordination problem is that the state where we both stay on the bus (D, D) is an equilibrium state; neither of us alone can improve it by getting off the bus—only if we’re able to coordinate both getting off the bus does this work. You can think of it like a local optimum: if you take one step in any direction (if any single one of us changes our action) we lose utility on net.

The name for such an equilibrium comes from mathematician John Nash (on whom the film A Beautiful Mind was based), and it is called a Nash equilibrium. Both (C, C) and (D, D) are Nash equilibria in a coordination problem. Can you see how many Nash equilibria there are in the Prisoner’s Dilemma?
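To make the equilibrium-counting concrete, here is a small sketch that checks every cell of a 2x2 game for the "no profitable unilateral deviation" property. The payoff numbers are assumptions, chosen only to be consistent with the values quoted in the text (defecting against a cooperator earns 4 in the PD and 0 in the coordination game):

```python
# Count pure-strategy Nash equilibria in a 2x2 game:
# a cell is an equilibrium if neither player gains by deviating alone.
def pure_nash_equilibria(payoffs, moves=("C", "D")):
    eq = []
    for a in moves:
        for b in moves:
            a_ok = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in moves)
            b_ok = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in moves)
            if a_ok and b_ok:
                eq.append((a, b))
    return eq

# Hypothetical payoffs consistent with the numbers in the text.
PD    = {("C", "C"): (3, 3), ("C", "D"): (0, 4), ("D", "C"): (4, 0), ("D", "D"): (1, 1)}
COORD = {("C", "C"): (4, 4), ("C", "D"): (0, 0), ("D", "C"): (0, 0), ("D", "D"): (2, 2)}

print(pure_nash_equilibria(PD))     # [('D', 'D')] -- just one
print(pure_nash_equilibria(COORD))  # [('C', 'C'), ('D', 'D')] -- two
```

The answer to the question above: the Prisoner's Dilemma has exactly one Nash equilibrium, mutual defection, while the coordination game has two, which is why the interesting question there is how to move between them.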

Solving problems and resolving dilemmas

A good way to contrast coordination problems and free-rider problems is to think about these equilibrium states. In the free-rider problem, the situation where everyone cooperates is not a Nash equilibrium—everyone is incentivised to defect while the others cooperate, and so occasionally some people do. While the PD has only one Nash equilibrium, however, a coordination problem has two! The challenge is moving from the current one to the one we all prefer.

Free-rider problems are solved by creating new incentives against defecting. For example, the government punishes you if you don’t pay your taxes. In sports, the practice of doping is punished, and what’s more, it’s made out to be dishonourable. People tell stories of the evil people who dope and how we all look down on them; even if you could dope and probably get away with it, there’s no plausible deniability in your mind—you know you’re being a bad person and would be judged by every one of your colleagues.

Coordination problems can be solved by creating such incentives, but they can also be solved just by improving information flow. We’ll see that below.

Three Coordination Problems

That situation when you and I lock eyes, nod, and get off the bus? That’s having common knowledge. It’s the confidence to take the step, because you’re not worried about what I might do. Because you know I’m getting off the bus with you.

Now that we’ve got a handle on what common knowledge is, we can turn back to the three puzzling phenomena from the beginning.

Dictators and freedom of speech

Dictatorships all through history have attempted to suppress freedom of the press and freedom of speech. Why is this? Are they just very sensitive? On the other side, the leaders of the Enlightenment fought for freedom of speech, and would not budge an inch on this principle.

Many people under a dictatorship want a revolution—but rebelling only makes sense if enough other people want to rebel. The people as a whole are much more powerful than the government, but you alone won’t be any match for the local police force. You have to know that the others are willing to rebel (as long as you rebel), and you have to know that they know that you’re willing to rebel.

People in a dictatorship are all trying to move to a better Nash equilibrium without going via the corners of the box (i.e. where some people rebel, but not enough, and then you have some pointless deaths instead of a revolution).

That feeling of worrying whether the people around you will support you if you attack the police? That’s what it’s like not to have common knowledge. When a dictator gets ousted by the people, it’s often in the form of a riot, because you can see the other people around you who are poised on the brink of violence. They can see you, and you all know that if you moved as one you might accomplish something. That’s the feeling of common knowledge.

The dictator is trying to suppress the people’s ability to create common knowledge that jumps them straight to the better equilibrium—and so they attempt to suppress the news media. Preventing common knowledge from being formed among the populace means that large factions cannot coordinate—this is a successful divide-and-conquer strategy, and is why dictators are able to lead with so little support (often <1% of the population).

Uncertainty in Romance

When two people are on a date and want to sleep with each other, the conversation will often move towards but never explicitly discuss having sex. The two may discuss going back to one of their places, with a different explicit reason discussed (e.g. “to have a drink”), even if both want to have sex.

Notice the difference between:

  • Walking up to someone cold at a bar and starting a conversation

  • Walking up to someone at a bar, after you noticed them stealing glances at you

  • Walking up to someone at a bar, after you glanced at them, they glanced at you, and your eyes locked

It’s easiest to approach confidently in the last case, since you have clear evidence that you’re both at least interested in a flirtatious conversation.

In dating, getting explicitly rejected is a loss of status, so people are incentivised to put a lot of effort into preserving plausible deniability. No really, I just came up to your flat to listen to your vinyl records! Similarly, we know other people don’t like getting rejected, so we rarely explicitly ask either. Are you trying to have sex with me?

So with sex, romance, or even deep friendships, people are often trying to get to the better equilibrium without common knowledge, up until the moment that they’re both very confident that both parties are interested in raising their level of intimacy.

(Scott Alexander wrote about this attempt to avoid rejection and the confusion it entails in his post Conversation Deliberately Skirts the Border of Incomprehensibility.)

This problem of avoiding common knowledge as we try to move to a better Nash equilibrium also shows up in negotiations and war, where you might make a threat, and not want there to be common knowledge of whether you’ll actually follow through on that threat.

(Added: After listening to a podcast with Robin Hanson, I realise that I’ve simplified too much here. It’s also the case that each member of the couple might not have figured out whether they want to have sex, and so plausible deniability gives them an out if they decide not to, without the explicit status hit/attack.

I definitely have the sense that if someone very bluntly states subtext when they notice it, this means I can’t play the game with them even if I wanted to: when they state it explicitly I have to say “No!” or else admit that I was slightly flirting / exploring a romance with them, and significantly increase the chance I will immediately receive an explicit rejection.)

Communal/Religious Rituals

Throughout history, communities have had religious rituals that look very similar. Everyone in the village has to join in. There are repetitive songs, repetitive lectures on the same holy books, chanting together. Why, of all the possible community events (e.g. dinners, parties, etc.) is this the most common type?

Michael Chwe wrote a whole book on this topic. To simplify massively: rituals are a space to create common knowledge in a community.

You don’t just listen to a pastor talk about virtue and sin. You listen together, where you know that everyone else was listening too. You say ‘amen’ together after each prayer the pastor speaks, and you all know that you’re listening along and paying attention. You speak the Lord’s Prayer or some Buddhist chant together, and you know that everyone knows the words.

Rituals create common knowledge about what in the community is rewarded, and what is punished. This is why religions are so powerful (and why the state likes to control religion). It’s not just a part of life like other institutions everyone uses, like a market or a bank—this is an institution that builds common knowledge about all areas of life, especially the most important communal norms.

To flesh out the punishment part of that: When someone does something sinful by the standards of the community, you know that they know they’re not supposed to, and they know that you know that they know. This makes it easier to punish people—they can’t claim they didn’t know they weren’t supposed to do something. And making it easier to punish people also makes people less likely to sin in the first place.

The rituals have been gradually improved and changed over time, and often the trade-offs have been towards helping coordinate a community. This is why the words in the chants or songs that everyone sings are simple, repetitive, and often rhyme—so you know that everyone knows exactly what they are. This is why rituals often occur seated in a circle—not only can you see the performance, but you can see me seeing the performance, and I you, and we have common knowledge.

Common knowledge is often much easier to build in small groups—in the example about getting off the bus, the two need only look at each other, share a nod, and common knowledge is achieved. Building common knowledge between hundreds or thousands of people is significantly harder, and the fact that religion has such a significant ability to do so is why it has historically had so much connection to politics.

Common Knowledge Production in Society at Large

Common knowledge is a very common state of affairs that humans had to reason about naturally in the ancestral environment; there is no explicit mathematical calculation being done when two people lock eyes on a bus and then coordinate getting off and seeing their friend.

We’ve looked at how religions help create common knowledge of norms. Here are a few other common-knowledge-producing mechanisms that exist in the world today.

The News

The main way common knowledge is built is by having everyone in the same room, in silence, while somebody speaks. Another way (in the modern world) is official channels of communication that you know everyone listens to.

This is actually one of the good reasons to discuss the news so much—we’ve built trust that what the NYT says is common knowledge, and so we can coordinate around it. Sometimes an official document is advertised widely and is known to be known, becoming common knowledge even if we ourselves often haven’t read it (e.g. Will MacAskill’s book, the NYT).

Nowadays there is no such single news source, and we’ve lost that coordination mechanism. We all have Facebook, but Facebook is entirely built out of bubbles. Facebook could choose to create common knowledge by making something appear in everyone’s feed, but they choose not to (and this is in fact a fairly restrained use of power that I appreciate).

One time Facebook slipped up on this was when they built their ‘Marked Safe’ feature. If a dangerous event (big fire, terrorist attack, earthquake) happened near you, you could ‘mark yourself safe’ and then all of your friends would get a notification saying you were safe.

Now, it was clear that everyone else was seeing the notifications you were seeing, and so if your nearby friend marked themselves safe and you didn’t, your friends would all notice that conspicuous absence of a notification, and know that you had chosen not to click it. This creates a pressure for all people to always notify their friends whenever there’s been a dangerous event near them, even if the odds of their being involved were minuscule. This is a clear waste of time and attention, and the feature continues to be a piece of security theatre in our lives.

A related point about the power of media that creates common knowledge: in his book, Michael Chwe does some data analysis of the marketing strategies of multiple different industries. He classifies some products as ‘social goods’—those you want to buy if you expect other people to like them. For example, you want to buy wines that you know your guests like, or bring beer to parties that others like; you want to use popular computer brands that people have developed software for; etc.

He then shows that social brands typically pay more per viewer for advertising; not necessarily more in total, but they’ll pay a higher amount for opportunities to broadcast in places that generate common knowledge. Rather than buy 10 opportunities to broadcast to 2 million people on various channels, they’ll pay a premium for 20 million people to view their ad during the Super Bowl, to create stronger common knowledge.
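A toy calculation suggests why the single big broadcast can be worth a per-viewer premium for social goods. The audience sizes below are the ones from the paragraph above; the "shared exposure" measure is my own simplification, not Chwe's analysis:

```python
# Toy model: for a "social good", what matters is not just how many people
# saw the ad, but whether pairs of acquaintances saw the *same* broadcast
# (so each knows the other saw it too).
def shared_exposure(audiences, population):
    """Probability that two randomly chosen people saw the same broadcast,
    assuming each broadcast reaches a disjoint random slice of the population."""
    return sum((a / population) ** 2 for a in audiences)

population = 20_000_000
one_super_bowl_ad = shared_exposure([20_000_000], population)      # 1.0
ten_small_ads = shared_exposure([2_000_000] * 10, population)      # ~0.1

assert one_super_bowl_ad == 1.0
assert abs(ten_small_ads - 0.1) < 1e-9
```

Both strategies reach 20 million total viewers, but under this simplification the single broadcast makes mutual exposure certain, while ten scattered ads leave a random pair with only a 10% chance of having seen the same one.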

Academic Research

The central place where common knowledge is generated in science is in journals. These are where researchers can discover the new insights of the field, and build off them. Conferences can also help in this regard.

A more interesting case is textbooks (I borrow this example from Oliver Habryka). There was once a time in the history of physics when the basics of quantum mechanics were known, and yet studying them required reading the right journal articles, in the right order. When you went to a convention of physicists, you likely had to explain many of the basics of the field before you could express your new idea.

Then, some people decided to aggregate it all into textbooks, which were then taught to the undergraduates of the next generation, until the point where you could walk into the room and start using all the jargon and trust that everyone knew what you meant. Having common knowledge of the basics of a field is necessary for a field to make progress—to make the 201 the 101, and then build new insights on top.

In my life, even if 90% of the people around me have an idea, when I’m not confident that 100% do, I often explain the basic idea for everyone, which costs a lot of time. Common knowledge compresses this: for example, after you read this post, I’ll be able to say to you a sentence like ‘the undergrad textbook system is a mechanism to create the common knowledge that allows the field as a whole to jump to the new Nash equilibrium of using advanced concepts’.

Paragraphs can be reduced to sentences, and you can get even more powerful returns with more abstract ideas—in mathematics, pages of symbols can be turned into a couple of lines (with the right abstractions, e.g. calculus, linear algebra, probability theory, etc.).


Startups

A startup is a very small group of people building detailed models of a product. They’re able to create a lot of common knowledge due to their small size. However, one of the reasons why they need to put a lot of thought into the long term of the company is that they will lose this common-knowledge-producing mechanism as they scale, and the only things they’ll be able to coordinate on are the things they already learned together.

The fact that they’re able to build common knowledge when they’re small is why they’re able to make so much more progress than big companies, and is also why big companies that innovate tend to compartmentalise their teams into small groups. As the company grows, there are far fewer things that can be retained as common knowledge amongst the employees. You can have intensive on-boarding processes for the first 20 hires, but it really doesn’t scale to 100 employees.

Here are three things that can be sustained as common knowledge at very large scales:

Name: Y Combinator says that the name of your company should tell people what you do—cf. Airbnb, Instacart, DoorDash, OpenAI, Lyft, etc. Contrast with companies like Palantir, where even I don’t know exactly what they work on day-to-day, and I’ve got friends who work there.

Mission: You can predict the output of an organisation surprisingly well from what its mission statement concretely communicates. For example, SpaceX has its mission statement at the top of all hiring documents (cf. the application forms to be a rocket scientist, business analyst, or barista).

Values: These affect hiring and decision-making long into the future. YC specifically says to pick 4–8 core values, have a story associated with each value, and tell each story every day (e.g. in meetings). That may seem like way too much, but in fact that’s how much it can take to make the values common knowledge (especially as your company scales).

At what cost?

A standard response to coordination failures is one of exasperation—a feeling that we should be able to solve this if only we tried.

Imagine you’re trying to coordinate yourself and a few friends to move some furniture, and they keep getting in each other’s way. You might shout “Hey guys! Look, Pete and Laurie have to move the couch first, then John and Pauline can move the table!” And then things just start working. Or even just between two of you—when a friend is late for Skype calls because she messes up her calendar app, you might express irritation, and she might try extra hard to fix the problem.

We also feel this when we look at society at large, for example when we look at coordination failures in politics. Why does everyone continue voting for silly no-good politicians? Why can’t we all just vote for someone sane?!

In the book Inadequate Equilibria by Eliezer Yudkowsky, the character Simplicio represents this feeling. Here is the character discussing a (real) coordination failure in the US healthcare system that causes a few dozen newborn children to die every year:

Simplicio: The first thing you have to understand, Visitor, is that the folk in this world are hypocrites, cowards, psychopaths, and sheep.

I mean, I certainly care about the lives of newborn children. Hearing about their plight certainly makes me want to do something about it. When I see the problem continuing in spite of that, I can only conclude that other people don’t feel the level of moral indignation that I feel when staring at a heap of dead babies.

Regardless, I’m not seeing what the grand obstacle is to people solving these problems by, you know, coordinating. If people would just act in unity, so much could be done!

I feel like you’re placing too much blame on system-level issues, Cecie, when the simpler hypothesis is just that the people in the system are terrible: bad at thinking, bad at caring, bad at coordinating. You claim to be a “cynic,” but your whole worldview sounds rose-tinted to me.

One of the final points to deeply understand about common knowledge in society is how costly it is to create at scale.

Big companies get to pick only a few sentences to become common knowledge. To have a community rally around a more complex set of values and ideals (i.e. a significant function of religion), each and every member of that community must give up half of each Sunday to repeat ideas they already know, over and over—nothing new, just with the goal of creating common knowledge.

There used to be news programmes everybody in a country would tune in for. Notice how the New York Times used to be something people would read once per week or once per day, and discuss with friends, even though most of the info had no direct effect on their lives.

Our intuitions were developed for tribes of size 150 or less (cf. Dunbar’s number) and as such, our intuitions around coordination are often terribly off. Simplicio is someone who has not noticed the cost of creating common knowledge at scale. He believes that society could easily vote for good politicians if only we coordinated, and because we don’t, he infers we must be stupid and/or evil.

The feel­ing of in­dig­na­tion at peo­ple for failing to co­or­di­nate can be thought of as cre­at­ing an in­cen­tive to solve the co­or­di­na­tion prob­lem. I’m let­ting my skype part­ner know that I will pun­ish them if they fail again. But to­day, this feel­ing to­ward peo­ple for failing to co­or­di­nate is al­most always mis­guided.

Think of it this way: many coordination problems are sufficiently small that you’ll solve them quickly, while many others are sufficiently big that you have no chance of solving them via normal means, and you will feel indignation every time you notice them (e.g. think politics/Twitter). Basically, when you feel like being indignant in the modern world, 99% of the time it’s wasted motion.

Simplicio’s intuitions are a great fit for a hunter-gatherer tribe: when he got indignant, it would be proportional to the problem, the problem would get solved, and everyone would be happy. At a later point in the book, Simplicio calls for political revolution—the sort of mechanism that works if you’re able to get everyone to gather in a single place.

Solving coordination problems at scale is much harder, and requires thinking about incentive structures and information flows rather than directing emotions at individuals in your social environment. In other words, it requires building a civilization.

visitor: Indeed. Moving from bad equilibria to better equilibria is the whole point of having a civilization in the first place.

- Another character in Inadequate Equilibria, by Eliezer Yudkowsky

So, what’s common knowledge for?

Summary of this post:

  1. A coordination problem is when everyone is taking some action A, and we’d rather all be taking action B, but it’s bad if we don’t all move to B at the same time.

  2. Common knowledge is the name for the epistemic state we’re collectively in when we know we can all start choosing action B—and trust everyone else to do the same.

  3. We’re intuitively very good at navigating such problems when we’re in small groups (size < 150).

  4. We’re intuitively very bad at navigating such problems in the modern world, and need to build new, microeconomic intuitions in order to create a successful society.
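
The coordination problem described in point 1 can be sketched as a tiny two-player game. The payoff numbers below are my own illustrative choice (a stag-hunt-style matrix), not something from the post: both playing A is a stable equilibrium, both playing B is a better one, but switching to B alone is the worst outcome, which is why nobody moves without trusting that everyone else will.

```python
# Toy two-player coordination game (illustrative payoffs, chosen by me).
# Payoff to a player choosing `mine` while the other player chooses `theirs`.
PAYOFFS = {
    ("A", "A"): 1,  # status quo: okay for everyone
    ("B", "B"): 3,  # the better equilibrium we'd all prefer
    ("A", "B"): 1,  # staying put while the other moves costs you nothing
    ("B", "A"): 0,  # moving to B alone is the worst outcome
}

def best_response(theirs: str) -> str:
    """The action that maximises my payoff, given the other player's action."""
    return max(["A", "B"], key=lambda mine: PAYOFFS[(mine, theirs)])

# A profile where both players are best-responding to each other is a
# Nash equilibrium: no one gains by changing their action alone.
equilibria = [
    (p, q)
    for p in ["A", "B"]
    for q in ["A", "B"]
    if best_response(q) == p and best_response(p) == q
]

print(equilibria)  # [('A', 'A'), ('B', 'B')] — both are self-reinforcing
```

Common knowledge (point 2) is exactly what lets the group jump from (A, A) to (B, B): each player needs to know not just that B is better, but that everyone else knows it, and knows that everyone knows it, and so on.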

There is a great deal more subtlety to how common knowledge gets built and propagated. This post has given but a glimpse through the lens of game theory, and hopefully you now see the light that this lens sheds on a great variety of phenomena.

Links to explore more on this subject:

  • Moloch’s Toolbox (Inadequate Equilibria, Ch 3) (link)

    • A guide to the ways our current institutions fail to coordinate. Largely applying standard microeconomics, and a great post to read after this one.

  • Meditations on Moloch (link)

    • An original idea about coordination failures, which the above book chapter formalised. It’s a great post, and it’s good to follow the intellectual heritage of ideas.

  • Rational Ritual: Culture, Coordination and Common Knowledge (link)

    • Solid book with lots of detail.

  • Scott Aaronson on Common Knowledge and Aumann’s Agreement Theorem (link)

    • This post caused me to spend a bunch more time thinking about these topics, though I find some of its explanations rely on somewhat advanced technical knowledge, and I’m not sure I agree with the real-world applications of Aumann’s Agreement Theorem.

  • Scott Alexander’s sequence on Game Theory (link)

    • After writing this post, I found Scott Alexander had also written about some of the examples (especially the dictatorship one) in detail 7 years ago (link).

  • Andrew Critch on ‘Unrolling Social Metacognition: Three levels of meta are not enough’ (link)

    • This is a great post going into the details of how my modelling of you modelling me modelling you… works in practice. Highly recommended if the definition of common knowledge presented above seemed confusing.

Thanks to Raymond Arnold, Jacob Lagerros and Oliver Habryka for extensive feedback and comments, and to Hadrien Pouget for proofreading an early draft. A further special mention to Raymond for pointing out that this term ought to be a standard piece of expert jargon in this community, and for suggesting I write this post.