Norm Innovation and Theory of Mind

Disclaimer: this was the first concept that led me to thinking about the coordination frontier. But something about the frame here feels subtly off. I decided to go ahead and post it – I’m pretty sure I believe all the words here, but I’m not 100% sure this is the best way to think about norm-negotiation problems.

Last post was about coordination schemes. Today’s post is about a subset of coordination schemes: norms, and norm enforcement.

The internet is full of people unilaterally enforcing new norms on each other, often based on completely different worldviews. Many people have (rightly, IMO) developed a defensiveness about being accused of things they don’t think are wrong.

Nonetheless, if society is to improve, it may be useful to invent (and enforce) new norms. What’s a good way to go about that?

Ideally, I think people discuss new norms with each other before starting to enforce them. Bring them up at town hall. Write a thoughtful essay and get people to critique it or discuss potential improvements.

But often, norm-conflict comes up suddenly and confusingly. Someone violates what you thought was a foundational norm of your social circle, and you casually say “hey, you just did X”. And they’re like “yeah?” and you’re flabbergasted that they’re just casually violating what you assumed was an obvious pillar of society.

This is tricky even in the best of circumstances. You thought you could rely on a group following Norm X, and then it turns out that if you want Norm X, you have to advocate for it yourself.

It’s even more tricky when multiple people are trying to introduce new norms at once.

Multiplayer Norm Innovation

Imagine you have Alice, Bob, Charlie and Doofus, who all agree that you shouldn’t steal from or lie to the ingroup, and you shouldn’t murder anyone, ingroup or outgroup.

(Note the distinction between ingroups and outgroups, which matters quite a bit).

Alice, Bob, and Charlie also all agree that you should (ideally) aim to have a robust set of coordination meta-principles. But, they don’t know much about what that means. (Doofus has no such aspirations. Sorry about your name, Doofus; this essay is opinionated.)

One day Alice comes to believe: “Not only should you not lie to the ingroup, you also shouldn’t use misleading arguments or cherry-picked statistics to manipulate the ingroup.”

Around the same time, Bob comes to believe: “Not only should you not steal from the ingroup, you also shouldn’t steal from the outgroup.” Trade is much more valuable than stealing cattle. Bob begins trying to convince people of this using misleading arguments and bad statistics.

Alice tells Bob “Hey, you shouldn’t use misleading arguments to persuade the ingroup of things because it harms our ability to coordinate.”

This argument makes perfect sense to Alice.

The next day, Bob makes another misleading argument to the ingroup.

Alice says “What the hell, Bob?”

The day after that, Bob catches Alice stealing cattle from their rivals across the river, and says “What the hell, Alice, didn’t you read my blogpost on why outgroup-theft is bad?”

Someday, I would like to have a principled answer to the question “What is the best way for all of these characters to interact?” In this post, I’d like to focus on one aspect of why the problem is hard.

Disclaimer: This example probably doesn’t represent a coherent world. Clean examples be hard, yo.

Theory of Mind

The Sally-Anne test is a psychological tool for looking at how children develop theory of mind. A child is told a story about Sally and Anne. Sally has a marble. She puts it in her basket, and then leaves. While she’s away, her friend Anne takes the marble and hides it in another basket.

The child is asked “When Sally returns, where does she think her marble is?”

Very young children incorrectly answer “Sally will think the marble is in Anne’s basket.” The child-subject knows that Anne took the marble, and they don’t yet have the ability to model that Sally has different beliefs than they do.

Older children correctly answer the question. They have developed theory of mind.

“What the hell, Bob?”

When Alice says “what the hell, Bob?”, I think she’s (sometimes) failing a more advanced theory of mind test.

Alice knows she told Bob “Hey, you shouldn’t use misleading arguments to persuade the ingroup of things because it harms our ability to coordinate.” This seemed like a complete explanation. But she is mismodeling a) how many assumptions she swept under the rug, and b) how hard it is to learn a new concept in the first place.

Sometimes the failure is even worse than that. Maybe Alice told Bob the argument. But then she runs into Bob’s friend Charlie, who is also making misleading arguments, and she doesn’t even think to check whether Charlie has been exposed to the argument at all. She gets mad at Charlie, and Charlie gets frustrated at being called out on a behavior he’s never even thought about before.

I’ve personally been the guy getting frustrated that nobody else is following “the obvious norms”, when I had never even told anyone the norm, let alone argued for it. It just seemed to obviously follow from my background information.

Assuming Logical Omniscience

There are several problems all feeding into each other here. The first several problems are variations on “Inferential distance is a way bigger deal than you think”, like:

  • Alice expects she can explain something once in 5 minutes and it should basically work. But, if you’re introducing a new way of thinking, it might take years to resolve a disagreement, because…

  • Alice’s claims are obvious to her within her model of the world. But, her frame might have lots of assumptions that aren’t obvious to others.

  • Alice may have initially explained her idea poorly, and Bob wrote her off as not worth listening to. (Idea Inoculation + Inferential Distance)

  • Alice has spent tons of time thinking about how bad it is to make misleading arguments, to the point where it feels obviously wrong and distasteful to her. Bob has not done that, and Alice is having a hard time modeling Bob. She keeps expecting that aesthetic distaste to be present, and relying on it to do some rhetorical work that it doesn’t do.

  • Much of this is also present in the other direction. Bob is really preoccupied with getting people to stop stealing things; it seems obviously really important, since right now there’s an equilibrium where everyone is getting stolen from all the time. When Alice argues about being extra careful with arguments, Bob feels like she has a missing mood, like she doesn’t understand why the equilibrium of theft is urgent. And that is downstream of Bob similarly underestimating the inferential gulf around why stealing your rival’s cattle is limiting economic growth.

This all gets more complex when things have been going on for a while. Alice and Bob both come to a (plausibly) reasonable belief that “Surely, I have made the case well enough that outgroup-theft/misleading-arguments are bad.” They might even have reasonable evidence for this, because people are making statements like “Theft is bad!” and “Misleading arguments are bad!”

But, nonetheless, Alice has thought about Misleading Arguments a lot. She is very attuned to them, whereas everyone else has just started paying attention. She has begun thinking multiple steps beyond that – building entire edifices that take the initial claims as basic axioms, exploring deep into the coordination frontier along different directions. Bob is having a similar experience re: Theft.

So they are constantly seeing people take actions that look to them like straightforward defections, defections they think other people have opted into being called out on, but which actually require additional inferential steps that are not yet common knowledge, let alone consensus.

Attention, Mistrust, and Stag Hunts

Meanwhile, another problem here is that, even if Bob and Alice take each other’s claims seriously, they might live in a world where lots of people are proposing norms.

Some of those norms are actively bad.

Some people are wielding norm-pushing as a weapon to gain social status or win political fights. (Even the people pushing good norms).

Some of the norms are good, but you can only prioritize so many new norms at once. Even people nominally on the same side may have different conceptions of what ingroup boundaries they are trying to draw, what standards they are trying to uphold, and whether a given degree of virtue is positive or negative for their ingroup.

People often model new norms as a stag hunt – if only we all pitched in to create a new societal expectation, we’d reap benefits from our collective action. Unfortunately, most stag hunts are actually Schelling coordination games – the question is not “stag or no?”, it’s “which of the millions of stags are we even trying to kill?”

This all adds up to the unfortunate fact that the Schelling choice is rabbit, not stag.
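To make the “rabbit is the Schelling choice” point concrete, here’s a minimal sketch in Python. All the specifics are illustrative assumptions on my part (the payoff numbers, twenty candidate norms, four players, and players picking stags uniformly at random), not anything from this post’s examples: the point is just that once there are many possible stags, the odds of everyone converging on the same one by chance collapse, and the guaranteed rabbit wins in expectation.

```python
# A toy model of a stag hunt with MANY possible stags (candidate norms).
# All numbers here are made up for illustration.

STAG_PAYOFF = 10     # reward per player if EVERYONE hunts the same stag
RABBIT_PAYOFF = 2    # safe fallback payoff, independent of what others do
NUM_STAGS = 20       # how many candidate norms are being proposed at once
NUM_PLAYERS = 4      # Alice, Bob, Charlie, Doofus

def expected_stag_payoff(num_stags: int, num_players: int) -> float:
    """Expected payoff of hunting a randomly chosen stag, assuming every
    other player also picks a stag uniformly at random."""
    # Probability that all other players independently land on *your* stag:
    p_coordinate = (1 / num_stags) ** (num_players - 1)
    return STAG_PAYOFF * p_coordinate

ev_stag = expected_stag_payoff(NUM_STAGS, NUM_PLAYERS)
print(f"Expected value of hunting a stag:   {ev_stag:.5f}")  # ~0.00125
print(f"Guaranteed value of hunting rabbit: {RABBIT_PAYOFF}")
```

With these (assumed) numbers, uncoordinated stag-hunting is worth roughly 1/8000th of the stag payoff, which is why “which stag?” has to be settled before the classic stag-hunt logic even applies.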

Attention resources are scarce. Not many people are paying attention to any given Overton-window fight. People get exhausted by having too many Overton fights in a row. Within a single dispute, people have limited bandwidth before the cost of figuring out the optimal choice stops seeming worth it.

So when someone shows up promoting a new norm, there’s a lot of genuine reason to be skeptical and react defensively.

Takeaways

This essay may seem kinda pessimistic about establishing new norms. But overall I think new norms are pretty important.

Once upon a time, we didn’t have norms against stealing from the outgroup. Over time, we somehow got that norm, and it allowed us to reap massive gains through trade. The real story was obviously nowhere near as simplistic as Bob’s. Maybe people started with some incidental trade, and the norm developed in fits and starts after the fact. Maybe merchants (who stood to benefit from the norm) actively promoted it in a self-interested fashion. Or maybe ancient civilizations handled this largely by redefining ingroups. But somehow or other we got from there to here.

Once upon a time, we didn’t even have statistics, let alone norms against misusing them to mislead people. Much of society is still statistically illiterate, so it’s a hard norm to apply in all contexts. Shared use of statistics is a coordination scheme that civilization is still in the process of making its capital investment in.

Part of the point of having intellectual communities is to get on the same page about novel ways we can defect on the epistemic commons. So that we can learn not to. So we can push the coordination frontier forward.

(Or, with a more positive spin: part of the point of dedicated communities is to develop new positive skills and habits, where we can benefit tremendously if lots of people in a network share them.)

But this is tricky, because people might have conceptual disagreements about what the norm should even be. (Among people who care about statistics, there are disagreements about how to use them properly. I recently observed an honest-to-goodness fight between a frequentist and a Bayesian that drove this point home.)

Multiplayer Norm Pioneering is legitimately hard

If you’re the sort of person who’s proactively looking for better societal norms, you should expect to constantly be running into people not understanding you. The more steps you are beyond the coordination baseline, the less agreement with your policies you should expect.

If you’re in a community of people who are collectively trying to push the coordination frontier forward via new norms, you should expect to constantly be pushing it in different directions, resulting in misunderstandings. This can be a significant source of friction even when everyone involved is well-intentioned and trying to cooperate. Part of that friction stems from the fact that we can’t reliably tell who is trying to cooperate in improving the culture, and who is trying to get away with stuff.

I have some sense that there are good practices norm-pioneers can adopt that make it easier to interact with each other. Ideally, when people who are trying to push society forward run into conflict with each other, they’d have a set of tools for resolving that conflict as efficiently as possible.

I have some thoughts on how to navigate all this. But each of my thoughts ended up looking suspiciously like “here’s a new norm”, and I was wary of muddling this meta-level post with object level arguments.

For now, I just want to leave people with the point that developing new norms creates inferential gaps. Efficient coordination generally requires people to be on the same page about what they’re coordinating on. It feels tractable to me to establish some meta-level cooperation among norm-pioneers, but exactly how to go about it feels like an unsolved problem.