In the last open thread, Lumifer linked to a list from the American Statistical Association of points one needs to understand to be considered statistically literate. In another comment in the same thread, sixes_and_sevens asked for statements we know are true but that the average layperson gets wrong; in response, he mostly got examples from the natural sciences and mathematics. This makes me wonder: can we build a general test of education across all of these fields of knowledge that can be graded automatically? Such a test would serve as a benchmark for traditional educational methods and as a self-check for autodidacts.
I imagine simple calculation problems for some topics and multiple-choice questions for other scenarios where intuition suffices.
Edit: Please don’t just upvote; point to similar ideas in your own field, or critique the idea.
There are concept inventories in a lot of fields, though they vary in quality and usefulness. The best known is the Force Concept Inventory for first-semester mechanics, which basically aims to test how Aristotelian versus Newtonian a student’s thinking is. Any physicist can point out a dozen problems with it, but it seems to very roughly measure what it claims to measure.
Russ Roberts (host of the podcast EconTalk) likes to talk about the “economic way of thinking” and has written about, and gathered links on, ten key ideas: incentives, markets, externalities, and so on. But he’s relatively libertarian, so his choice of ideas and his exposition probably won’t provide a very complete picture. In any case, EconTalk has started posting discussion questions after each podcast, some of which aim to test basic understanding along these lines.
It seems to me like something that could be handled by a community-driven website where users vote on questions.
I’ve often considered a self-assessment system where the sitter is prompted with a series of terms from the topic at hand, and asked to rate their understanding on a scale of 0-5, with 0 being “I’ve never heard of this concept”, and 5 being “I could build one of these myself from scratch”.
The terms are provided in a random order, and include red-herring terms that have nothing to do with the topic at hand but sound plausible. Whoever provides the dictionary of terms should have some idea of the relative difficulty of each term, but you could refine this further by calibrating it against a sample of known, diverse users (novices, high-schoolers, undergrads, etc.).
When someone sits the test, you report their overall score relative to your calibrated sitters (“You scored 76, which puts you at undergrad level”), but you also report something like the Spearman rank correlation coefficient of their answers against the difficulty of the terms. This provides a consistency check on their answers. If they frequently claim greater understanding of advanced concepts than of basic ones, their understanding of the topic is almost certainly off-kilter (or they’re lying). The presence of red-herring terms (which should all have a canonical score of 0) means the rank-coefficient consistency check remains meaningful for domain experts and for people who give the same value for every term.
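To make that consistency check concrete, here’s a minimal sketch in Python. The terms, difficulty ranks, red herring, and sitter ratings below are all hypothetical, and scipy’s spearmanr does the actual rank correlation:

```python
# Minimal sketch of the self-assessment consistency check.
# Terms, difficulty ranks, and the sitter's ratings are hypothetical.
from scipy.stats import spearmanr

# Canonical difficulty rank per term. Red herrings are ranked past the
# hardest real term, since their canonical rating is 0 ("never heard of it").
RED_HERRING = 99
difficulty = {
    "mean": 1,
    "standard deviation": 2,
    "p-value": 3,
    "sufficient statistic": 4,
    "quasi-normal deviate": RED_HERRING,  # plausible-sounding fake
}

# The sitter's 0-5 self-ratings for the same terms.
ratings = {
    "mean": 5,
    "standard deviation": 4,
    "p-value": 2,
    "sufficient statistic": 1,
    "quasi-normal deviate": 0,
}

terms = list(difficulty)
rho, _ = spearmanr([difficulty[t] for t in terms],
                   [ratings[t] for t in terms])
print(f"Spearman rho: {rho:.2f}")
# A consistent sitter shows a strongly negative rho: easy terms rated high,
# hard terms low, red herrings at 0. A rho near zero or positive flags
# off-kilter understanding, or a sitter who is bluffing.
```

Ranking red herrings past the hardest real term is just one way to fold them into the check; you could equally correlate the sitter’s ratings against canonical expected ratings rather than against difficulty.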
Actually, this seems like a very good learning-a-new-web-framework dev project. I might give this a go.
Look up Bayesian Truth Serum; it’s not exactly what you’re talking about, but it’s a generalized way to elicit subjective data. I’m not certain of its viability for individual rankings, though.
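For reference, here is a rough sketch of the BTS scoring rule as I understand it from Prelec’s 2004 paper: each respondent both answers and predicts the distribution of others’ answers, and is rewarded for endorsing “surprisingly common” answers plus prediction accuracy. The toy data and the alpha weight are made up for illustration, and this is population-level scoring rather than the individual rankings in question:

```python
# Rough sketch of Bayesian Truth Serum scoring (after Prelec, 2004).
# All data below is made up; alpha weights the prediction score.
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """answers: (n, k) one-hot endorsements; predictions: (n, k) each
    respondent's predicted frequency of every answer (rows sum to 1)."""
    eps = 1e-9
    x_bar = answers.mean(axis=0) + eps                   # actual frequencies
    log_y_bar = np.log(predictions + eps).mean(axis=0)   # log geometric mean
    # Information score: reward endorsing answers that turn out to be
    # more common than the crowd predicted.
    info = (answers * (np.log(x_bar) - log_y_bar)).sum(axis=1)
    # Prediction score: penalize predictions far from actual frequencies.
    pred = (x_bar * np.log((predictions + eps) / x_bar)).sum(axis=1)
    return info + alpha * pred

# Toy example: 4 respondents, 2 answer options.
answers = np.array([[1, 0], [1, 0], [0, 1], [1, 0]], dtype=float)
predictions = np.array([[0.6, 0.4], [0.5, 0.5], [0.3, 0.7], [0.7, 0.3]])
print(bts_scores(answers, predictions))
```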
This is all sorts of useful. Thanks.
One problem that could crop up if you’re not careful is a control term actually being used in an educational source you haven’t considered: a particular class, say, or a nonstandard textbook. I have a non-Euclidean geometry book that uses names for Euclidean geometry features that I never encountered in geometry class. If those terms had been placed as controls, I would have given them a non-zero rating.
Who’s going to do the rather substantial amount of work needed to put the system together?
Do you mean to build the system or to populate it with content? The former would be “me, unless I get bored or run out of time and impetus”, and the latter is “whichever domain experts I can convince to list and rank terms from their discipline”.
I was thinking about the work involved in populating it.