Maybe you could do something with LLM sentiment analysis of participants' conversations (e.g. when they roleplay discussing what's best for the company, genuinely trying to do a good job, both before and after).
Though for such a scenario, I imagine that learning about fallacies has only a limited bearing, and only if people learn to notice the fallacies in themselves, not just in someone they already disagree with.
We have been playing around with this (for another project): recording conversations and then trying to mark out instances of the fallacies and biases. It's not very accurate right now, but we're trying to turn it into something a bit more fun and usable. Like you said, though, the biases and fallacies are just a small, discrete part of the whole story. We wanted to start somewhere.
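To give a rough idea of the marking-out step: a minimal sketch in Python, where a crude keyword heuristic stands in for the actual LLM classifier (the pattern lists, function names, and example transcript here are all hypothetical, not our real pipeline):

```python
# Sketch: tag each turn of a transcript with suspected fallacy labels.
# A keyword heuristic stands in for the LLM pass; in practice you'd
# send each turn (with context) to a model and parse its labels.

FALLACY_PATTERNS = {
    "ad_hominem": ["you would say that", "coming from you"],
    "appeal_to_authority": ["experts agree", "everyone knows"],
    "slippery_slope": ["next thing you know", "where does it end"],
}

def mark_fallacies(turns):
    """Return (speaker, text, [fallacy labels]) for each turn."""
    marked = []
    for speaker, text in turns:
        lowered = text.lower()
        labels = [name for name, cues in FALLACY_PATTERNS.items()
                  if any(cue in lowered for cue in cues)]
        marked.append((speaker, text, labels))
    return marked

transcript = [
    ("A", "Experts agree we should cut the budget."),
    ("B", "Well, you would say that, wouldn't you?"),
    ("A", "If we delay once, next thing you know we never ship."),
]

for speaker, text, labels in mark_fallacies(transcript):
    print(speaker, labels)
```

The interesting (and hard) part is of course the classifier itself, plus deciding how much surrounding context a turn needs before a label is fair, which is where most of the inaccuracy comes from.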