Dan Kahan’s other experimental work over the last 8 years or so probably has further useful ideas. Adapting tests from the heuristics & biases literature (e.g. this old review article) may also work, depending on what you wish to accomplish.
There is a potential pitfall in directly testing people’s general knowledge of contested issues. People who score poorly on test questions about issue X can simply complain that it’s the test designer who is wrong about issue X, not them, and unless you’re absolutely sure of the correct answers to the relevant questions yourself, you can’t rule out the possibility that the test is unfair. One way to skirt this problem is to ask people about uncontested, well-established facts, like election results in countries with relatively democratic reputations, or to ask them about things you know to be false because you made them up, like fake, exaggerated quotations from political figures.
Great! Thanks. Kahan’s papers are very useful. In one paper he and his colleagues ask not whether some policy-relevant claim X (such as whether climate change is caused by human activities) is true, but rather whether expert scientists generally agree that X is true, or generally agree that X is false, or are divided. The latter is much easier to establish (conveniently, the US National Academy of Sciences publishes ‘expert consensus reports’ from which Kahan’s examples are taken). As expected, people’s beliefs match their political opinions in a suspicious manner: “hierarchical individualists” (roughly conservatives) tend to believe that there is no expert consensus on climate change being caused by humans even though there is, whereas very few “egalitarian communitarians” believe that there is an expert consensus on geological isolation of nuclear waste being safe, even though there is.