One, this indicates a good way to teach history and rational thinking at the same time: present historical data up to a set point, ask students to reason out what they think will happen next in history, and then reveal what actually happened. The feedback can be used to calibrate and improve their historical reasoning (which will hopefully provide some benefit in other domains).
I really like this idea, but it seems like it could be tricky to do well (or alternatively, easy to do badly). The key question is what data you present to people, given that you clearly can’t present it all. Randomly selecting data risks missing useful material, while consciously selecting data risks falsely confirming the selector’s theories about why things happened.
The first risk is probably less of a concern, and maybe getting students to ask for/research the data they think might be useful could help get around any remaining worries (as well as training people to ask the right questions)?
Another concern is getting rid of the data already in the participants’ heads. The biggest reason I am not terribly interested in alternate histories is that I do not know much about the history that actually happened. What I do know came from textbooks.
Shifting topics back to the original post, I have little issue with alternate histories choosing an ending and finding choices that could have led to that result. There is no reason bias needs to pollute this scenario.
I would probably be more interested in alternate histories that asked questions about things like Enron, Columbine, Bruce Lee, or DeLorean.
Once you know the conclusion you want, though, the human mind is very good at finding a way to make the details fit in between and make it feel like you’re right. You’re right that bias doesn’t have to exist: there’s nothing about having the conclusion first that inherently renders every hypothesized step toward it biased. After all, many sciences start from a known result and work back to the cause. But with alternate history, rationalization easily sneaks in, so it’s better to work in the other direction, so you aren’t tempted to push the outcomes somewhere other than where they really ought to flow.
I agree. I think the flip-side is the temptation to claim you have suppressed any goal for the alternate outcome, yet still feel disappointed when your alternate history of the Cold War ends with similar results. When I think of “alternate” I think “drastically different,” not “minor details changed.” But that is probably easier to dodge than pushing toward the result you want.
I agree. It’s a problem I’m not really sure how to deal with, and one we will have to hammer out if we do this.