OK, here’s an exercise that could at least help participants notice how motivated cognition gives us incorrect beliefs; it’s like a mix between calibration exercises and paranoid debating.
Requirements
This exercise requires a set of “interesting questions” with numerical answers, for example “How many people were killed by guns in New York in 1990-1999?” or “What is the unemployment rate in China?” (the questions should relate at least somewhat to political/social issues; no “What’s the population of Mozambique?”).
It is also best done in a classroom-type setting with a big video projector and a bunch of computers with internet connections. Somebody who knows Excel will also need to prepare a special spreadsheet in advance.
Step one: Crafting Arguments
Students are together in a room, with one computer each (or they can take turns using a computer, timing isn’t critical); an organizer goes to each student and gives him a paper with the question, then flips a coin and announces “high!” if it’s heads, and “low!” if it’s tails.
Each student then has 30 minutes to prepare a set of arguments for why that particular value is high, or low. The result should be one PowerPoint slide (or maybe better, the Google Docs equivalent) containing his best arguments; he is allowed to look anything up on the internet (including the true value), but his slide can only contain true information (“New York was rated the most violent city by XXX magazine”, things like that).
Step two: Everybody guesses
Once everybody is ready, the organizers collect all the argument slides; for each one, the question is read aloud, and then the list of arguments is displayed, along with whether they are arguments for a high or a low value. Each student (except the arguer) writes down his best guess for the answer to the question, as well as a 90% confidence interval; they should be given about thirty seconds.
Once the time is up, everybody reads his answer out loud to an organizer, who enters them into Excel and immediately gets a nice chart (projected on the wall) comparing everybody’s answers and confidence intervals to the true answer, followed by a scatterplot showing how narrow people’s intervals were versus how close they were to the answer.
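As a concrete sketch of what that spreadsheet computes, here is the per-guess data behind the scatterplot: interval width versus distance from the true answer (the names and numbers below are purely illustrative, not from the exercise itself):

```python
def summarize_guesses(guesses, truth):
    """guesses: list of (name, estimate, low, high); truth: the real value.
    Returns (name, interval_width, abs_error, truth_in_interval) per guess."""
    rows = []
    for name, estimate, low, high in guesses:
        width = high - low                    # how narrow the interval was
        error = abs(estimate - truth)         # how close the guess was
        rows.append((name, width, error, low <= truth <= high))
    return rows

# Illustrative data: two guessers answering a question whose true value is 5000.
guesses = [
    ("Alice", 4000, 2000, 9000),
    ("Bob", 12000, 11000, 13000),
]
print(summarize_guesses(guesses, truth=5000))
```

Plotting width against error for everyone gives exactly the narrowness-versus-closeness scatterplot described above.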
The clever arguer gets points for how many estimates were too high (or too low, depending on the side he argued) and for how wide the confidence intervals were on that side; the others get points for the probability they assigned to the correct answer (log scoring rule, etc.).
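The post doesn’t pin down exact formulas, so here is one possible implementation of both scores: the guesser’s log score treats each guess as a normal distribution whose mean is the point estimate and whose 90% interval matches the stated one (an assumption on my part), while the arguer simply earns a point per estimate pushed past the truth in the argued direction:

```python
import math

Z90 = 1.645  # two-sided z-value for a 90% normal interval

def guesser_score(estimate, low, high, truth):
    """Log score: log density at the true answer, under a normal
    distribution implied by the point estimate and 90% interval."""
    sigma = max((high - low) / (2 * Z90), 1e-9)  # implied std dev
    z = (truth - estimate) / sigma
    return -math.log(sigma * math.sqrt(2 * math.pi)) - z * z / 2

def arguer_score(estimates, truth, argued_high):
    """One point per estimate biased past the truth in the argued direction."""
    if argued_high:
        return sum(1 for e in estimates if e > truth)
    return sum(1 for e in estimates if e < truth)
```

With this scoring, a narrow interval centered near the truth beats a huge interval, but a narrow interval far from the truth is punished heavily, which is exactly the incentive the exercise wants.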
Comments
If this works (if people do indeed bias their estimates despite knowing that the arguments are the result of a coin flip, and if they learn to correct for this as the exercise progresses), it should give participants a “gut level” understanding of why motivated cognition gives wrong beliefs, complementing the conceptual understanding of why it’s defective as a cognitive algorithm (the “Bottom Line” insight).
This exercise is designed to bring the feedback as close as possible to the moment of estimation, to make learning stronger. Having it in a slightly competitive setting also discourages people from just giving huge confidence intervals.
It may be interesting to first collect a bunch of estimates from people who won’t participate, just to compare them to the students’ estimates on the same questions; the students can then be shown the difference between “how people guess with the bottom-line argument” and “how people guess without it” (normally, the former should be worse).
A simple variant is having yes-or-no questions, and students giving probability estimates.
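Scoring this variant is even simpler; a minimal sketch (the formula choice is mine, the post doesn’t specify one):

```python
import math

def binary_log_score(p_yes, answer_is_yes):
    # Log of the probability assigned to the actual answer:
    # saying 0.5 scores log(0.5), and confidently wrong
    # answers score very negatively.
    p = p_yes if answer_is_yes else 1.0 - p_yes
    return math.log(p)
```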
Another variant is to have multiple choice questions (ideally with six possible answers, so the organizers can roll a die); this simplifies the guessing (no more confidence intervals!), and questions about numerical values can be transformed into multiple choice questions with a list of intervals.
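Transforming a numerical question into a six-option multiple choice could look like this (the cutoff values below are purely illustrative):

```python
def to_multiple_choice(value, cutoffs):
    """Return the index (0..len(cutoffs)) of the interval containing value;
    five cutoffs give six options, one per die face."""
    for i, cutoff in enumerate(cutoffs):
        if value < cutoff:
            return i
    return len(cutoffs)

# Illustrative cutoffs: six intervals for a "how many people..." question.
cutoffs = [100, 1000, 5000, 10000, 50000]
```

The organizers would then roll the die to pick which of the six intervals each arguer must argue for.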