I made a card game to reduce cognitive biases and logical fallacies, but I’m not sure what dependent variable (DV) to test in a study of its effectiveness.

First, how and why I made this game

I spent several years as the chief product officer of a software company in Melbourne. At one point, I raised money for the company with the aim of doubling its size and hiring more engineers. As we grew faster and more cross-functional teams came online, I noticed a behaviour emerging: I would get asked to chime in on things (should we build this thing this way or that?) when my sentiment at the time was that decisions were best made closer to the action. I was not close. So I wanted to encourage people to make decisions on their own.

So eventually I made this proclamation. I said, you can make whatever decisions you want moving forward, so long as they have the highest probability of getting us to our goals. The actual details of this were a bit more nuanced than that, but generally speaking that was the picture.

Someone asked: how do we know that what we’re doing has the highest probability of reaching our goals?

I said I didn’t know. But I’d find out.

Around this time, I hired a computational neuroscientist named Brian Oakley, who had completed his PhD a few years earlier on communication between visual cortical areas. He was very clever and had a tendency to come up with answers to things I thought were relatively unanswerable. So I asked him…

Would it be possible to start to measure decision quality in the organisation?

He said he didn’t know. But he’d find out.

What Brian went on to do subsequently became the focus of a bunch of decision intelligence consulting I did part time while I went back to university to study neuropsychology. It motivated me to think of ways to improve decision making, particularly in cross-functional teams (a space I knew well), in a way that wasn’t overly complex.

I’d been a fan of a group called Management 3.0, who had designed very simple and accessible card games which turned management training from long, three-day seminars (whose content people generally forgot within days) into tactical games that could be run in 20 minutes. They have one game called Delegation Poker, which is a way to really fine-tune who can decide what in the relationship between a manager and an employee. It helps clarify the undefinable, and I had started to wonder if it might be possible to build a game that could do the same for decision quality.

Decision science is a very broad topic indeed, so when I started working on this idea, I knew it would be impossible to cover everything. I had come up with a few ideas, and Brian had given me some thoughts on how they could be improved. What I ended up focusing on was a simple aspect of decision making: where all good (and poor) decisions tend to start.

In cross-functional teams, I often found individuals (who would describe themselves as very logical) debating which direction to take certain projects in ways that were quite illogical or heavily biased. There were always little corporate vendettas people were engaged in, small emotional infractions which had become capital crimes, and past experiences which never ceased to influence what software people should build and why. When I reflected on this and watched teams debate ideas, I started to wonder if we could hire a debate coach to work on very basic things, like removing logical fallacies from arguments. I found nobody who could do such work.

So I started trying to build a game that might simulate this learning. In essence, the goal was to do several things.

  1. Expose people to cognitive biases and logical fallacies (this felt like the natural first step).

  2. Have individuals role-play these biases in simulated debates (a form of elaborative learning).

  3. And reward people for being able to spot them, or act them out undetected.

I’d hoped this kind of format would expand individuals’ bias and logical fallacy vocabularies well beyond confirmation bias, which was often the only bias cited when debates emerged, especially at the most dramatic apex of corporate arguments. Equally, I hoped the game might train individuals to really listen to arguments as they unfolded, focusing them on the key points and not on the individuals (ad hominem).

So I made one version of this game and started giving it to friends to play around with. I called it Homo Rationalis and included a bunch of pre-loaded scenarios that people could discuss; these were the conversational simulations that allowed people to role-play the biases in question.

I eventually got some feedback, made some tweaks to the game, and made it more affordable to produce. Initially, I was running down to a local Snap printing shop and producing copies which cost me around $300 for one deck: hardly a sustainable way of helping improve decision making if the primary decision in making it would send me broke. With a bit more time up my sleeve between semesters, I started making a much more professional version. It now comes in a box with a better set of instructions, and it can be produced and re-ordered when I run out of copies.

After I began selling these, though, I realised I actually cared a lot more about this space than I did going in. I was attending lectures and seminars at university (as a 40-year-old mature-age student) and noticed just how much time was spent trying to ‘teach critical thinking’ to students, which was, quite frankly, pretty poor. There was a lot of asking students to reflect on things which were quite obvious, but not a lot of real, meaningful training going on. What I’d found was that my game was a form of elaborative learning; as students role-play biases and have to pick sides of an argument, occasionally sides they do not agree with, it forces them to re-learn how to listen, how to make arguments, and how to avoid specific fallacies and biases in real time.

To be fair, the game is not easy, because it is frequently not easy to role-play biases, which are highly varied, in scenarios in which you don’t commonly see them appear. For instance, it is difficult to act out social desirability bias when debating whether pho is better than ramen. But my suspicion is that this difficulty actually adds to the learning process by having individuals process the concepts at deeper levels of cognitive deliberation (the levels-of-processing effect). I do think the game is effective, and with time I’m hoping to run a study, perhaps a kind of randomised trial, to see if exposure to it can actually improve individuals’ decision-making abilities.

So here is my question—my ask.

Firstly, it would obviously be great if anyone wanted to try the game and tell me what they think. You can purchase it here (it’s priced as cheaply as I can make it) and it comes in a fancy box. It has one flat rate for shipping; in reality, orders from some locations will lose me money, but on average, so long as I don’t sell all my copies in Montenegro, it should work out okay. My intention is to use the profits from the sale of this game to fund a superior, actually effective form of critical thinking training for high-school and university students, and to turn this whole thing into a social enterprise. My experience at university was quite formative, and I want to help in this way, by using the game profits as a funding mechanism to support younger people in these dimensions at a time when the internet, more broadly, seems to be eroding one cognitive faculty after another.

Secondly, does anyone have any suggestions for dependent variables I might be able to track in an experiment on a game like this? The problem I’ve found is that certain biases lend themselves to certain experiments, but broader decision-making abilities (related to biases and fallacies) are trickier to operationalise. For instance, implicit-association tests exist, but they only test that particular faculty. There are tests for things like the Hawthorne effect (observer effect). I can test, individually, how susceptible someone is to certain biases, but what I’m really trying to measure is how susceptible they are, overall, to cognitive biases and logical fallacies.

The only way I figured it could be done is to play a video of, say, two podcasters arguing some point (that vaccines cause autism, for example) and have individuals identify the cognitive biases and fallacies, if any exist, in a set of media clips. This would be a kind of ‘exam’ at the end of the program.
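If the clips were pre-labelled (some containing a planted fallacy, some clean), one candidate DV for that exam is a signal-detection sensitivity score rather than raw accuracy, since d′ separates genuine discrimination from a participant’s tendency to simply flag everything as fallacious. Here’s a minimal sketch in Python; the function name, the example counts, and the 10/10 clip split are all hypothetical, and the log-linear correction is just one common convention for handling extreme rates.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for a fallacy-spotting 'exam'.

    hits: fallacy-containing clips correctly flagged
    misses: fallacy-containing clips not flagged
    false_alarms: clean clips wrongly flagged
    correct_rejections: clean clips correctly passed
    """
    # Log-linear correction so hit/false-alarm rates of 0 or 1
    # don't send the z-transform to infinity
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: 20 clips per participant, 10 with a planted
# fallacy and 10 without, scored before and after playing the game
pre = d_prime(hits=5, misses=5, false_alarms=4, correct_rejections=6)
post = d_prime(hits=8, misses=2, false_alarms=3, correct_rejections=7)
print(f"pre d' = {pre:.2f}, post d' = {post:.2f}")
```

The between-groups DV in a randomised trial would then be the pre-to-post change in d′ for players versus a control group, which guards against someone ‘improving’ just by calling everything a bias.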

I figured that, given the demographic and interests of LessWrong, someone might be able to suggest a DV I could use to run some experiments.