Bayesianism and the Use of Evidence in Social Deduction Games

You look around the table at four friends—people who share your hatred for the evil empire, or so you thought. At this table, where the resistance meets to plan its missions, fully two of the five operatives are spies, infiltrating the rebels to sabotage those missions. You’ve seen your loyalty card, so you know you’re resistance… but how do you figure out which of your so-called allies are the spies?

The Resistance, like Werewolf, Mafia, Battlestar Galactica, and other social deduction games, tasks the majority of players with rooting out the spies in their midst—while the spies win by staying hidden. Among my friends, accusations of spyhood tend to be absolute: “Did you see how long he hesitated? He must be a spy!” Whether the suspicion is based on social cues or in-game actions, players rapidly become certain of the beliefs they voice at the table. They seem to sort their observations into two neat boxes according to whether the data can decisively prove someone’s identity: if evidence seems convincing, it becomes concrete proof, immune to discussion; if it doesn’t, it’s disregarded.

This treatment of evidence can lead to overconfidence: once, when I was thoroughly framed by the spies, a fellow resistance member refused even to imagine how I could be innocent. And why should he listen to me? He had evidence that I was a spy. On the other hand, it can just as easily lead to underconfidence: when new players see that there is no conclusive proof one way or the other, they often disregard the hints and suggestive evidence available (someone’s tone of voice, or their eagerness to go on a mission) and throw their hands up at the supposed randomness of the game.

Using Bayesianism as an alternative to this dichotomy lets me give evidence the appropriate weight, rather than letting narrative ideas guide my play. Suppose a two-person mission succeeds; the next mission adds a player to that team, and it fails. According to story logic, the first two players are trustworthy, so the third must have sabotaged the new mission. More experienced players treat the first mission as having no informational value: spies may lay low, so any of the three players could be the saboteur, and it’s a 1/3 shot for each. According to Bayesianism, P(player 3 is a spy) should reflect all available evidence, properly weighted. How likely is it for a spy to lay low on the first mission? Who chose player 3 to join the mission? What is player 3’s strategy as a spy? I find that this approach, investigating all available evidence and updating my suspicions accordingly, makes my accusations more precise, and I hope it leads my teammates to start valuing evidence in the graded way that these games, and investigation in life generally, require for success.
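To make that update concrete, here is a minimal sketch in Python of the two-mission scenario. Everything numerical in it is an illustrative assumption rather than anything from the game’s rules or the post: it supposes exactly one spy among the three players and invents a fixed sabotage probability, just to show how the posterior lands somewhere between “certainly player 3” and “an even 1/3 each.”

```python
# A toy Bayesian update for the two-mission scenario above.
# Illustrative assumptions (not from the game rules): exactly one spy
# among players 1, 2, 3, and a made-up, fixed sabotage probability.

SABOTAGE_PROB = 0.7  # hypothetical P(a spy sabotages a mission they're on)

# Uniform prior: each of players 1, 2, 3 is equally likely to be the lone spy.
posterior = {1: 1 / 3, 2: 1 / 3, 3: 1 / 3}

# Mission 1: players 1 and 2, succeeded. Mission 2: all three players, failed.
missions = [({1, 2}, "success"), ({1, 2, 3}, "fail")]


def likelihood(spy, team, outcome):
    """P(outcome | `spy` is the lone spy) for a single mission."""
    if spy not in team:
        return 1.0 if outcome == "success" else 0.0  # no spy aboard: always succeeds
    # Spy aboard: the mission fails iff they choose to sabotage.
    return SABOTAGE_PROB if outcome == "fail" else 1 - SABOTAGE_PROB


for team, outcome in missions:
    # Bayes' rule: scale each hypothesis by its likelihood, then renormalize.
    posterior = {p: pr * likelihood(p, team, outcome) for p, pr in posterior.items()}
    total = sum(posterior.values())
    posterior = {p: pr / total for p, pr in posterior.items()}

for player, prob in sorted(posterior.items()):
    print(f"P(player {player} is the spy) = {prob:.2f}")
```

With these made-up numbers, player 3 comes out around 0.62 and players 1 and 2 around 0.19 each: noticeably more suspicious than a flat 1/3, but far short of the concrete proof that story logic claims. Lowering SABOTAGE_PROB (spies who lay low more often) pulls all three estimates back toward 1/3.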

I post this not only because I love playing Resistance (obviously!), but also because I think this game could be a fun and useful exercise in Bayesian reasoning, for the same reasons that Paranoid Debating may be: the group’s appraisal of the evidence needs to be accurate for the resistance to win, while it must be inaccurate for the spies to win. This encourages proper Bayesian technique among the resistance, and clever, bias-abusing rhetoric from the spies to twist the game in their favor.

If anyone would like to use this game at a LessWrong meetup, or as an activity run by the Center for Modern Rationality, all you need are the rules (here and here), a deck of playing cards, and the power of Bayes!

(Special thanks to Julia Galef, for thinking the game sounded like a fun idea for teaching Bayesianism)