The Costly Coordination Mechanism of Common Knowledge

Recently someone pointed out to me that there was no good canonical post that explained the concept of common knowledge in society. Since I wanted to be able to link to such a post, I decided to try to write it.

The epistemic status of this post: I hoped to provide an explanation of a standard, mainstream concept, in a concrete way that could be broadly understood rather than in a mathematical/logical fashion. The definitions should all be correct, though the examples in the latter half are more speculative and likely contain some inaccuracies.

Let’s start with a puzzle. What do these three things have in common?

  • Dictatorships all through history have attempted to suppress freedom of the press and freedom of speech. Why is this? Are they just very sensitive? On the other side, the leaders of the Enlightenment fought for freedom of speech, and would not budge an inch on this principle.

  • When two people are on a date and want to sleep with each other, the conversation will often move towards but never explicitly discuss having sex. The two may discuss going back to one of their places, with a different explicit reason given (e.g. “to have a drink”), even if both want to have sex.

  • Throughout history, communities have had religious rituals that look very similar. Everyone in the village has to join in. There are repetitive songs, repetitive lectures on the same holy books, chanting together. Why, of all the possible community events (e.g. dinners, parties, etc.), is this the most common type?

What these three things have in common is common knowledge—or at least, the attempt to create it.

Before I spell that out, we’ll take a brief look into game theory so that we have the language to describe clearly what’s going on. Then we’ll be able to see concretely, in a bunch of examples, how common knowledge is necessary to understand and build institutions.

Table of Contents

  • Prisoner’s Dilemmas vs Coordination Problems

    • The Prisoner’s Dilemma (PD)

      • Real World Examples

      • Free-Rider Problems

    • Coordination Problems

      • A Stable State

      • Solving problems and resolving dilemmas

  • Three Coordination Problems

    • Dictators and Freedom of Speech

    • Uncertainty in Romance

    • Communal/Religious Rituals

  • Common Knowledge Production in Society at Large

    • The News

    • Academic Research

    • Startups

    • At What Cost?

  • (Summary) So, What’s Common Knowledge For?

Prisoner’s Dilemmas vs Coordination Problems

To understand why common knowledge is useful, I want to contrast two types of situations in game theory: Prisoner’s Dilemmas and Coordination Problems. They look similar at first glance, but their payoff matrices have important differences.

The Prisoner’s Dilemma (PD)

You’ve probably heard of it—two players have the opportunity to cooperate with or defect against each other, based on a story about two prisoners being offered a deal if they testify against the other.

If they both do nothing, they will both be put away for a short time; if one of them snitches on the other, the snitch gets off free and the snitched-on gets a long sentence. However, if they both snitch, they both get pretty bad sentences (though neither as long as when only one snitches on the other).

In game theory, people often like to draw little boxes that show two different people’s choices, and how much each likes the outcome. Such a diagram is called a decision matrix, and the numbers are called the players’ payoffs.

To describe the Prisoner’s Dilemma, below is a decision matrix where Anne and Bob each have the same two choices, labelled C and D. These are colloquially called ‘cooperate’ and ‘defect’. Each box contains two numbers, for Anne’s and Bob’s payoffs respectively.

If the prisoner ‘defects’ on his partner, this means he snitches, and if he ‘cooperates’ with his partner, he doesn’t snitch. They’d both prefer that both of them cooperate (C,C) to both of them defecting (D,D), but each of them has an incentive to stab the other in the back to reap the most reward.

Do you see in the matrix how they both would prefer no snitching to both snitching, but each also has an incentive to stab the other in the back?
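The incentive structure can be sketched in a few lines of Python. The exact payoff numbers below are illustrative assumptions (the post only fixes the backstab payoff at 4); what matters is the ordering, which makes ‘defect’ the better reply to anything the other player does:

```python
# Payoffs for the Prisoner's Dilemma, keyed by (Anne's choice, Bob's choice).
# Numbers are illustrative; only the ordering matters.
PD = {
    ("C", "C"): (3, 3),  # both stay quiet: short sentences
    ("C", "D"): (0, 4),  # Anne is snitched on: Bob walks free
    ("D", "C"): (4, 0),  # Anne snitches: she walks free
    ("D", "D"): (1, 1),  # both snitch: pretty bad sentences
}

def best_response(payoffs, my_index, their_choice):
    """The choice that maximises my payoff, holding the other player fixed."""
    def my_payoff(choice):
        pair = (choice, their_choice) if my_index == 0 else (their_choice, choice)
        return payoffs[pair][my_index]
    return max("CD", key=my_payoff)

# Whatever Bob does, Anne's best response is to defect (and vice versa):
assert best_response(PD, 0, "C") == "D"
assert best_response(PD, 0, "D") == "D"
```

This is what makes the dilemma a dilemma: defecting is each player’s best response to every choice the other might make, even though (C,C) beats (D,D) for both.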

Real World Examples

Nuclear disarmament is a prisoner’s dilemma. Both the Soviet Union and the US wanted to have nuclear bombs while the opponent didn’t, but they’d probably both have preferred a world where nobody had bombs to a world where they were both pointing massive weapons at each other’s heads. Unfortunately, in our world, we failed to solve the problem, and ended up pointing massive weapons at each other’s heads for decades.

Military budget spending more broadly can be a prisoner’s dilemma. Suppose two neighbouring countries are determining how much to spend on the military. They don’t want to go to war with each other, so they’d each like to spend a small amount of money on their military and spend the rest on running the country—infrastructure, healthcare, etc. However, if one country spends a small amount and the other spends a lot, then the second country can just walk in and take over the first. So they both spend lots of money on the military with no intention of using it, just so the other can’t take over.

Another prisoner’s dilemma is tennis players figuring out whether to take performance-enhancing drugs. Naturally, each would like to dope while the opposing player doesn’t, but they’d rather both not dope than both dope.

Free-Rider Problems

Did you notice how there are more than two tennis players in the doping situation? When deciding whether to take drugs, not only do you have to worry about whether your opponent in today’s match will dope, but also whether tomorrow’s opponent will, and the day after’s, and so on. We’re all wondering whether all of us will dope. Society contains loads of these scaled-up versions of the prisoner’s dilemma.

For example, according to many political theories, everyone is better off if the government takes some taxes and uses them to provide public goods (e.g. transportation, military, hospitals). As a population, it’s in everyone’s interest for everyone to cooperate and make a small personal sacrifice of wealth.

However, if most people are doing it, you can defect, and this is great for you—you get the advantage of a government providing public goods, and you also keep your own money. But if everyone defects, then nobody gets the important public goods, and this is worse for each person than if they’d all cooperated.

Whether you’re two robbers, one of many tennis players, or a whole country facing another country, you will run into a prisoner’s dilemma. In the scaled-up version, a person who defects while everyone else cooperates is known as a free-rider, and the scaled-up prisoner’s dilemma is called the free-rider problem.

Coordination Problems

With that under our belt, let’s look at a new decision matrix. Can you identify what’s importantly different about this matrix? Make a prediction about how you think this will change the players’ strategies.

Don’t mix this up with the Prisoner’s Dilemma—it’s quite different. In the PD, if you cooperate and I defect, I get 4. What’s important about the new decision matrix is that nobody has an incentive to backstab! If you cooperate and I defect, I get zero, instead of four.

We all want the same thing. Both players’ preference ordering is: both cooperating (C,C) is best, both defecting (D,D) is second, and the mismatched outcomes (C,D) and (D,C) are worst.

So you might be confused: why is this a problem at all? Why doesn’t everyone just pick C?
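Here is a minimal sketch of the difference. The payoff numbers are my illustrative assumptions (the post only fixes that a unilateral defection now earns 0 rather than 4); they are chosen to match the stated preference ordering:

```python
# Payoffs for the coordination game, keyed by (your choice, my choice).
# Illustrative numbers: (C,C) > (D,D) > mismatch, and no reward for backstabbing.
COORD = {
    ("C", "C"): (4, 4),  # we both get off the bus: best for both
    ("C", "D"): (0, 0),  # mismatch: we each lose the other's company
    ("D", "C"): (0, 0),
    ("D", "D"): (2, 2),  # we both stay on: fine, but worse
}

# If you get off the bus, my best move is to get off too...
assert max("CD", key=lambda c: COORD[(c, "C")][0]) == "C"
# ...but if you stay on, so should I. Neither of us wants to move alone.
assert max("CD", key=lambda c: COORD[(c, "D")][0]) == "D"
```

The second assertion is the whole problem: my best choice depends entirely on what I expect you to do.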

Let me give an example from Michael Chwe’s classic book on the subject, Rational Ritual: Culture, Coordination and Common Knowledge.

Say you and I are co-workers who ride the same bus home. Today the bus is completely packed and somehow we get separated. Because you are standing near the front door of the bus and I am near the back door, I catch a glimpse of you only at brief moments. Before we reach our usual stop, I notice a mutual acquaintance, who yells from the sidewalk, “Hey you two! Come join me for a drink!” Joining this acquaintance would be nice, but we care mainly about each other’s company. The bus doors open; separated by the crowd, we must decide independently whether to get off.
Say that when our acquaintance yells out, I look for you but cannot find you; I’m not sure whether you notice her or not and thus decide to stay on the bus. How exactly does the communication process fail? There are two possibilities. The first is simply that you do not notice her; maybe you are asleep. The second is that you do in fact notice her. But I stay on the bus because I don’t know whether you notice her or not. In this case we both know that our acquaintance yelled but I do not know that you know.
Successful communication sometimes is not simply a matter of whether a given message is received. It also depends on whether people are aware that other people also receive it. In other words, it is not just about people’s knowledge of the message; it is also about people knowing that other people know about it, the “metaknowledge” of the message.
Say that when our acquaintance yells, I see you raise your head and look around for me, but I’m not sure if you manage to find me. Even though I know about the yell, and I know that you know since I see you look up, I still decide to stay on the bus because I do not know that you know that I know. So just one “level” of metaknowledge is not enough.
Taking this further, one soon realizes that every level of metaknowledge is necessary: I must know about the yell, you must know, I must know that you know, you must know that I know, I must know that you know that I know, and so on; that is, the yell must be “common knowledge.”
The term “common knowledge” is used in many ways but here we stick to a precise definition. We say that an event or fact is common knowledge among a group of people if everyone knows it, everyone knows that everyone knows it, everyone knows that everyone knows that everyone knows it, and so on.
Two people can create these many levels of metaknowledge simply through eye contact: say that when our acquaintance yells I am looking at you and you are looking at me, [and we exchange a brief glance at our mutual friend and nod]. Thus I know you know about the yell, you know that I know that you know (you see me looking at you), and so on. If we do manage to make eye contact, we get off the bus; communication is successful.

Coordination problems are only ever problems when everyone is currently choosing D, and we need to coordinate so that we all choose C at the same time. To do that, we need common knowledge.

(The specific definition of common knowledge (“I know that you know that I know that…”) is often confusing, but for now the concrete examples below should help build a solid intuition for the idea.)
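For readers who do want something more formal: the standard way to make the definition precise is with possible worlds and information partitions (Aumann’s framework). This tiny model of the bus story is my own illustrative sketch, not from the post:

```python
from itertools import product

# Worlds record (I noticed the yell, you noticed the yell).
worlds = list(product([True, False], repeat=2))

# Without eye contact, each of us knows only our own observation: I cannot
# distinguish worlds that agree on my coordinate, and likewise for you.
def my_info(w):   return [v for v in worlds if v[0] == w[0]]
def your_info(w): return [v for v in worlds if v[1] == w[1]]

def knows(info, event):
    """Worlds in the event where this person is certain the event holds."""
    return [w for w in event if all(v in event for v in info(w))]

both_noticed = [(True, True)]

# Even in the world where we both noticed, I can't rule out that you didn't,
# so knowledge already fails at the first level without eye contact:
assert knows(my_info, both_noticed) == []

# Eye contact collapses each person's uncertainty to a single world, and
# then every level of "I know that you know that..." survives:
def eye_contact(w): return [w]
event = both_noticed
for _ in range(5):  # five levels of metaknowledge deep, and so on forever
    event = knows(eye_contact, event)
assert event == [(True, True)]
```

The loop makes the “and so on” concrete: applying the knowledge operator any number of times still leaves the event standing, which is exactly the infinite tower the definition demands.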

Compare you and I on the bus to the coordination game payoff matrix: if we both get off the bus (C,C), we get to hang out with each other and spend some time with a mutual acquaintance. If only one of us does, we both miss out on the opportunity to hang out with each other—the thing we want least, (C,D) or (D,C). If neither of us gets off the bus, we still get to hang out with each other, but in a less interesting way (D,D).

A Stable State

The reason this is a difficult coordination problem is that the state (D,D) is an equilibrium state; neither of us alone can improve on it by getting off the bus—it only works if we can coordinate both getting off the bus together. You can think of it like a local optimum: if you take one step in any direction (if any single one of us changes our action), we lose utility on net.

The name for such an equilibrium comes from mathematician John Nash (on whom the film A Beautiful Mind was based): it is called a Nash equilibrium. Both (C,C) and (D,D) are Nash equilibria in a coordination problem. Can you see how many Nash equilibria there are in the Prisoner’s Dilemma?

Solving problems and resolving dilemmas

A good way to contrast coordination problems and free-rider problems is to think about these equilibrium states. In the free-rider problem, the situation where everyone cooperates is not a Nash equilibrium—everyone is incentivised to defect while the others cooperate, and so occasionally some people do. While the PD has only one Nash equilibrium, however, a coordination problem has two! The challenge is moving from the current one to the one we all prefer.
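The contrast can be checked mechanically. Here’s a small sketch (payoff numbers are illustrative assumptions, as before) that enumerates the pure-strategy Nash equilibria of each matrix—the PD has exactly one, the coordination game two:

```python
# Illustrative payoff matrices, keyed by (Anne's choice, Bob's choice).
PD    = {("C","C"): (3,3), ("C","D"): (0,4), ("D","C"): (4,0), ("D","D"): (1,1)}
COORD = {("C","C"): (4,4), ("C","D"): (0,0), ("D","C"): (0,0), ("D","D"): (2,2)}

def pure_nash_equilibria(payoffs):
    """Profiles where neither player can gain by deviating unilaterally."""
    equilibria = []
    for (a, b), (pay_a, pay_b) in payoffs.items():
        a_stable = all(pay_a >= payoffs[(alt, b)][0] for alt in "CD")
        b_stable = all(pay_b >= payoffs[(a, alt)][1] for alt in "CD")
        if a_stable and b_stable:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PD))     # just the all-defect corner
print(pure_nash_equilibria(COORD))  # both (C,C) and (D,D)
```

This is also the answer to the question above: the Prisoner’s Dilemma has a single Nash equilibrium, mutual defection, which is why new incentives are needed to escape it, whereas a coordination game already contains the good equilibrium—we just need to get there together.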

Free-rider problems are solved by creating new incentives against defecting. For example, the government punishes you if you don’t pay your taxes. In sports, the practice of doping is punished, and what’s more, it’s made out to be dishonourable. People tell stories of the evil people who dope and how we all look down on them; even if you could dope and probably get away with it, there’s no plausible deniability in your mind—you know you’d be being a bad person and would be judged by every one of your colleagues.

Coordination problems can be solved by creating such incentives, but they can also be solved just by improving information flow. We’ll see that below.

Three Coordination Problems

That situation where you and I lock eyes, nod, and get off the bus? That’s having common knowledge. It’s the confidence to take the step, because you’re not worried about what I might do—because you know I’m getting off the bus with you.

Now that we’ve got a handle on what common knowledge is, we can turn back to the three puzzling phenomena from the beginning.

Dictators and Freedom of Speech

Dictatorships all through history have attempted to suppress freedom of the press and freedom of speech. Why is this? Are they just very sensitive? On the other side, the leaders of the Enlightenment fought for freedom of speech, and would not budge an inch on this principle.

Many people under a dictatorship want a revolution—but rebelling only makes sense if enough other people want to rebel. The people as a whole are much more powerful than the government, but you alone are no match for the local police force. You have to know that the others are willing to rebel (as long as you rebel), and you have to know that they know that you’re willing to rebel.

People in a dictatorship are all trying to move to the better Nash equilibrium without going via the corners of the matrix (i.e. where some people rebel, but not enough, and you get some pointless deaths instead of a revolution).

That feeling of worrying whether the people around you will support you if you attack the police—that’s what it’s like not to have common knowledge. When a dictator gets ousted by the people, it’s often in the form of a riot, because you can see the other people around you who are poised on the brink of violence. They can see you, and you all know that if you moved as one you might accomplish something. That’s the feeling of common knowledge.

The dictator is trying to suppress the people’s ability to create the common knowledge that would jump them straight to (C,C)—and so they attempt to suppress the news media. Preventing common knowledge from forming among the populace means that large factions cannot coordinate—this is a successful divide-and-conquer strategy, and is why dictators are able to rule with so little support (often <1% of the population).

Uncertainty in Romance

When two people are on a date and want to sleep with each other, the conversation will often move towards but never explicitly discuss having sex. The two may discuss going back to one of their places, with a different explicit reason given (e.g. “to have a drink”), even if both want to have sex.

Notice the difference between

  • Walking up to someone cold at a bar and starting a conversation

  • Walking up to someone at a bar, after you noticed them stealing glances at you

  • Walking up to someone at a bar, after you glanced at them, they glanced at you, and your eyes locked

It’s easiest to approach confidently in the last case, since you have clear evidence that you’re both at least interested in a flirtatious conversation.

In dating, getting explicitly rejected is a loss of status, so people are incentivised to put a lot of effort into preserving plausible deniability. No really, I just came up to your flat to listen to your vinyl records! Similarly, we know other people don’t like getting rejected, so we rarely explicitly ask either: Are you trying to have sex with me?

So with sex, romance, or even deep friendships, people are often trying to get to (C,C) without common knowledge, up until the moment that they’re both very confident that both parties are interested in raising their level of intimacy.

(Scott Alexander wrote about this attempt to avoid rejection, and the confusion it entails, in his post Conversation Deliberately Skirts the Border of Incomprehensibility.)

This problem of avoiding common knowledge as we try to move to a better Nash equilibrium also shows up in negotiations and war, where you might make a threat and not want there to be common knowledge of whether you’ll actually follow through on it.

(Added: After listening to a podcast with Robin Hanson, I realise that I’ve simplified too much here. It’s also the case that each member of the couple might not have figured out whether they want to have sex, and so plausible deniability gives them an out if they decide not to, without the explicit status hit/attack.

I definitely have the sense that if someone very bluntly states subtext when they notice it, this means I can’t play the game with them even if I wanted to, as when they state it explicitly I have to say “No!” or else admit that I was slightly flirting / exploring a romance with them, and significantly increase the chance that I will immediately receive an explicit rejection.)

Communal/Religious Rituals

Throughout history, communities have had religious rituals that look very similar. Everyone in the village has to join in. There are repetitive songs, repetitive lectures on the same holy books, chanting together. Why, of all the possible community events (e.g. dinners, parties, etc.), is this the most common type?

Michael Chwe wrote a whole book on this topic. To simplify massively: rituals are a space in which to create common knowledge in a community.

You don’t just listen to a pastor talk about virtue and sin. You listen together, where you know that everyone else is listening too. You say ‘amen’ together after each prayer the pastor speaks, and you all know that you’re listening along and paying attention. You speak the Lord’s Prayer or some Buddhist chant together, and you know that everyone knows the words.

Rituals create common knowledge about what in the community is rewarded and what is punished. This is why religions are so powerful (and why the state likes to control religions). It’s not a part of life like other institutions everyone uses, such as a market or a bank—it is an institution that builds common knowledge about all parts of life.

To flesh out the punishment half of that: when someone does something sinful by the standards of the community, you know that they know they’re not supposed to, and they know that you know that they know. This makes it easier to punish people—they can’t claim they didn’t know they weren’t supposed to do it. And making it easier to punish people also makes people less likely to sin in the first place.

The rituals have been gradually improved and changed over time, and the trade-offs have often favoured helping a community coordinate. This is why the words in the chants or songs that everyone sings are simple, repetitive, and often rhyming—so you know that everyone knows exactly what they are. This is why rituals often take place seated in a circle—not only can you see the performance, but you can see me seeing the performance, and I you, and we have common knowledge.

Common knowledge is often much easier to build in small groups—in the example about getting off the bus, the two need only look at each other and share a nod, and common knowledge is achieved. Building common knowledge between hundreds or thousands of people is significantly harder, and the fact that religion has such a significant ability to do so is why it has historically had so much connection to politics.

Common Knowledge Production in Society at Large

Common knowledge is a very common state of affairs, one humans had to reason about naturally in the ancestral environment; there is no explicit mathematical calculation being done when two people lock eyes on a bus and then coordinate getting off to see their friend.

We’ve looked at how religions help create common knowledge of norms. Here are a few other common-knowledge-producing mechanisms that exist in the world today.

The News

The main way common knowledge is built is by having everyone in the same room, in silence, while somebody speaks. Another way (in the modern world) is official channels of communication that you know everyone listens to.

This is actually one of the good reasons to discuss the news so much—we’ve built trust that what the NYT says is common knowledge, and so we can coordinate around it. Sometimes an official document is advertised widely and is known to be known—common knowledge—even if we ourselves often haven’t read it (e.g. Will MacAskill’s book, the NYT).

Nowadays there is no such single news source, and we’ve lost that coordination mechanism. We all have Facebook, but Facebook is entirely built out of bubbles. Facebook could choose to create common knowledge by making something appear in everyone’s feed, but they choose not to (and this is in fact a fairly restrained use of power that I appreciate).

One time Facebook slipped up on this was when they built their ‘Marked Safe’ feature. If a dangerous event (big fire, terrorist attack, earthquake) happened near you, you could ‘mark yourself safe’, and then all of your friends would get a notification saying you were safe.

Now, it was clear that everyone else was seeing the notifications you were seeing, and so if your nearby friend marked themselves safe and you didn’t, your friends would all notice that conspicuous absence of a notification, and know that you had chosen not to click it. This creates a pressure for everyone to always notify their friends whenever there’s been a dangerous event near them, even if the odds of their being involved were minuscule. This is a clear waste of time and attention, and the feature continues to be a piece of security theatre in our lives.

A related point about the power of media that creates common knowledge: in Michael Chwe’s book, he does some data analysis of the marketing strategies of multiple different industries. He classifies certain products as ‘social goods’—those you want to buy if you expect other people to like them. For example, you want to buy wines that you know your guests like, or bring beer to parties that others like; you want to use popular computer brands that people have developed software for; etc.

He then shows that social brands typically pay more per viewer for advertising; not necessarily more in total, but they’ll pay a higher amount for opportunities to broadcast in places that generate common knowledge. Rather than buying 10 opportunities to broadcast to 2 million people each on various channels, they’ll pay a premium for 20 million people to view their ad during the Super Bowl, to create stronger common knowledge.

Academic Research

The central place where common knowledge is generated in science is in journals. These are where researchers can discover the new insights of the field and build off them. Conferences can also help in this regard.

A more interesting case is textbooks (I borrow this example from Oliver Habryka). There was once a time in the history of physics when the basics of quantum mechanics were known, and yet to study them required reading the right journal articles, in the right order. When you went to a convention of physicists, you likely had to explain many of the basics of the field before you could express your new idea.

Then some people decided to aggregate it all into textbooks, which were taught to the undergraduates of the next generation, until you could walk into the room, start using all the jargon, and trust that everyone knew what you meant. Having common knowledge of the basics of a field is necessary for a field to make progress—to make the 201 the 101, and then build new insights on top.

In my life, even if 90% of the people around me have an idea, when I’m not confident that 100% do, I often explain the basic idea for everyone. This often costs a lot of time—for example, after you read this post, I’ll be able to say to you a sentence like ‘the undergrad textbook system is a mechanism to create the common knowledge that allows the field as a whole to jump to the new Nash equilibrium of using advanced concepts’.

Paragraphs can be reduced to sentences, and you can get even more powerful returns with more abstract ideas—in mathematics, pages of symbols can be turned into a couple of lines (with the right abstractions, e.g. calculus, linear algebra, probability theory, etc.).


Startups

A startup is a very small group of people building detailed models of a product. They’re able to create a lot of common knowledge due to their small size. However, one of the reasons they need to put a lot of thought into the long term of the company is that they will lose this common-knowledge-producing mechanism as they scale, and the only things they’ll be able to coordinate on are the things they already learned together.

The fact that they’re able to build common knowledge when they’re small is why they’re able to make so much more progress than big companies, and is also why big companies that innovate tend to compartmentalise their teams into small groups. As the company grows, there are far fewer things that can be retained as common knowledge amongst the employees. You can have intensive on-boarding processes for the first 20 hires, but that really doesn’t scale to 100 employees.

Here are three things that can sustain common knowledge at very large scales:

Name: Y Combinator says that the name of your company should tell people what you do—cf. AirBnb, InstaCart, DoorDash, OpenAI, Lyft, etc. Contrast with companies like Palantir, where even I don’t know exactly what they work on day-to-day, and I’ve got friends who work there.

Mission: It is possible to predict the output of an organisation very well from what its mission statement concretely communicates. For example, SpaceX has its mission statement at the top of all hiring documents (cf. the application forms to be a rocket scientist, business analyst, or barista).

Values: These affect hiring and decision-making long into the future. YC specifically says to pick 4-8 core values, have a story associated with each value, and tell each story every day (e.g. in meetings). That may seem like way too much, but in fact that’s how much it can take to make the values common knowledge (especially as your company scales).

At What Cost?

A standard response to coordination failures is one of exasperation—a feeling that we should be able to solve this if only we tried.

Imagine you’re trying to coordinate yourself and a few friends to move some furniture, and they keep getting in each other’s way. You might shout, “Hey guys! Look, Pete and Laurie have to move the couch first, then John and Pauline can move the table!” And then things just start working. Or even just between the two of you—when a friend is late for Skype calls because she messes up her calendar app, you might express irritation, and she might try extra hard to fix the problem.

We also feel this when we look at society at large, for example at coordination failures in politics. Why does everyone continue voting for silly-no-good politicians? Why can’t we all just vote for someone sane?!

In the book Inadequate Equilibria by Eliezer Yudkowsky, the character Simplicio represents this feeling. Here is the character discussing a (real) coordination failure in the US healthcare system that causes a few dozen newborn children to die every year:

simplicio: The first thing you have to understand, Visitor, is that the folk in this world are hypocrites, cowards, psychopaths, and sheep.
I mean, I certainly care about the lives of newborn children. Hearing about their plight certainly makes me want to do something about it. When I see the problem continuing in spite of that, I can only conclude that other people don’t feel the level of moral indignation that I feel when staring at a heap of dead babies.
Regardless, I’m not seeing what the grand obstacle is to people solving these problems by, you know, coordinating. If people would just act in unity, so much could be done!
I feel like you’re placing too much blame on system-level issues, Cecie, when the simpler hypothesis is just that the people in the system are terrible: bad at thinking, bad at caring, bad at coordinating. You claim to be a “cynic,” but your whole world-view sounds rose-tinted to me.

One of the final points to deeply understand about common knowledge in society is how costly it is to create at scale.

Big companies get to pick only a few sentences to become common knowledge. For a community to rally around a more complex set of values and ideals (i.e. a significant function of religion), each and every member of that community must give up half of every Sunday to repeat ideas they already know, over and over—nothing new, just with the goal of creating common knowledge.

There used to be news programmes everybody in a country would tune in for. Notice how the New York Times used to be something people would read once per week or once per day, and discuss with friends, even though most of the info has no direct effect on their lives.

Our intuitions were developed for tribes of 150 people or fewer (cf. Dunbar’s number), and as such, our intuitions around coordination are often terribly off. Simplicio is someone who has not noticed the cost of creating common knowledge at scale. He believes that society could easily vote for good politicians if only we coordinated, and because we don’t, he infers that we must be stupid and/or evil.

The feeling of indignation at people for failing to coordinate can be thought of as creating an incentive to solve the coordination problem: I’m letting my Skype partner know that I will punish them if they fail again. But today, this feeling toward people for failing to coordinate is almost always misguided.

Think of it this way: many coordination problems are sufficiently small that you’ll solve them quickly; many others are sufficiently big that you have no chance of solving them via normal means, and you will feel indignation every time you notice them (e.g. think politics/Twitter). Basically, when you feel indignant in the modern world, 99% of the time it’s wasted motion.

Simplicio’s intuitions are a great fit for a hunter-gatherer tribe. There, when he got indignant, his indignation would be proportional to the problem, the problem would get solved, and everyone would be happy. At a later point in the book, Simplicio calls for political revolution—the sort of mechanism that works if you’re able to get everyone to gather in a single place.

The solution to coordination problems at scale is much harder, and requires thinking about incentive structures and information flows, rather than emotions directed at individuals in your social environment. Or in other words: building a civilization.

Visitor: Indeed. Moving from bad equilibria to better equilibria is the whole point of having a civilization in the first place.

- Another char­ac­ter in In­ad­equate Equi­lib­ria, by Eliezer Yudkowsky

So, what’s com­mon know­ledge for?

Sum­mary of this post:

  1. A co­ordin­a­tion prob­lem is when every­one is tak­ing some ac­tion A, and we’d rather all be tak­ing ac­tion B, but it’s bad if we don’t all move to B at the same time.

  2. Com­mon know­ledge is the name for the epi­stemic state we’re col­lect­ively in, when we know we can all start choos­ing ac­tion B—and trust every­one else to do the same.

  3. We’re in­tu­it­ively very good at nav­ig­at­ing such prob­lems when we’re in small groups (size < 150).

  4. We’re intuitively very bad at navigating such problems in the modern world, and must build new, microeconomic intuitions in order to create a successful society.
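The coordination problem in points 1 and 2 can be made concrete with a toy payoff model (a sketch of my own, not drawn from any formal source; the specific payoff numbers are arbitrary). Sticking with action A pays a little; action B pays more, but only if everyone switches together, so no one moves without the trust that common knowledge provides:

```python
def payoff(my_action, others_actions):
    """Toy payoffs: A always yields 1; B yields 3 only if everyone
    else also plays B, otherwise the lone mover gets 0."""
    if my_action == "A":
        return 1
    return 3 if all(a == "B" for a in others_actions) else 0

n = 5
everyone_else_A = ["A"] * (n - 1)
everyone_else_B = ["B"] * (n - 1)

# Without trust that the others will switch, deviating to B is a loss...
assert payoff("B", everyone_else_A) < payoff("A", everyone_else_A)  # 0 < 1
# ...but if common knowledge lets all n players move at once,
# everyone is strictly better off.
assert payoff("B", everyone_else_B) > payoff("A", everyone_else_A)  # 3 > 1

print("lone deviator:", payoff("B", everyone_else_A))
print("coordinated switch:", payoff("B", everyone_else_B))
```

Note that merely knowing B is better is not enough to make switching rational here; each player also needs to trust that everyone else will switch at the same moment, which is exactly the epistemic state point 2 describes.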

There’s a bunch more subtlety to how common knowledge gets built and propagates; this post has given but a glimpse through the lens of game theory, and hopefully you see the light that this worldview sheds on a great variety of phenomena.

Links to ex­plore more on this sub­ject:

  • Mo­loch’s Tool­box (In­ad­equate Equi­lib­ria, Ch 3) (link)

    • A guide to the ways our cur­rent in­sti­tu­tions fail to co­ordin­ate. Largely ap­ply­ing stand­ard mi­croe­co­nom­ics, and a great post to read after this one.

  • Med­it­a­tions on Mo­loch (link)

    • An original essay about coordination failures, which the book chapter above formalised. It is still a great post, and it’s good to see the intellectual heritage of the ideas.

  • Ra­tional Ritual: Cul­ture, Coordin­a­tion and Com­mon Know­ledge (link)

    • Solid book with lots of de­tail.

  • Scott Aaron­son on Com­mon Know­ledge and Au­mann’s Agree­ment The­orem (link)

    • This post caused me to spend a bunch more time think­ing about these top­ics, and gives a bet­ter tech­nical un­der­stand­ing.

  • Scott Al­ex­an­der’s se­quence on Game The­ory (link)

    • After writ­ing this post, I found Scott had also writ­ten about some of the ex­amples (es­pe­cially the dic­tat­or­ship one) in de­tail 7 years ago (link).

My thanks to Ray Arnold and Jacob Lagerros for extensive feedback and comments, to Hadrien Pouget for proofreading an early draft, and to Oliver Habryka for further feedback and (excellent, correct) criticism.

A further thanks to Ray for pointing out that this term ought to be a standard piece of expert jargon in this community, and for suggesting I write this post.