I find this idea (or a close relative) a useful guide for resolving a heuristic explanation or judgment into a detailed, causal explanation or consequentialist judgment. If someone draws me an engine cycle that creates infinite work out of finite heat (Question 5), I can say it violates the laws of thermodynamics. Of course their engine really is impossible. But there's still confusion: our explanations remain in tension because something's left unexplained. To fully resolve this confusion, I have to look in detail at their engine cycle and find the error that permits the violation.
Principled explanations, especially about human behavior or society, tend to come into tension in a similar way. That tension can similarly point the way to detailed, causal explanations that will dissolve the question. For example, you say that an idea meeting a counter-idea may well fail to generate facts, which is contrary to your understanding of dialectics. It’s not very useful to merely state these ideas in opposition to each other, but there’s something to be learned by looking at where they conflict and why.
So in this case, where you doubt that this process generates facts, consider how it might or might not reliably do so. One way it could succeed is if there were a recipe for turning the conflict into an opportunity for learning, like "look for detailed causal mechanisms where the two big ideas directly conflict." One way it could fail is if the people holding each of the two ideas entrenched themselves in opposition, and everyone continued to simply talk past one another without attempting to understand. Now you've refined your heuristic so you can better judge how well this will work in individual cases, and you can iterate.
I think of the moral version of this as a generalization of the argument from marginal cases against giving moral standing to humans alone (i.e. that there's no value-relevant principle that selects all and only humans). The generalization is to come at this from both sides of a debate, and to say that you can expect any principled judgment to fail on marginal cases. The content of your principle lies in large part in how it treats those marginal cases. From this perspective, you study the marginal cases to improve your understanding of your values, rather than using heuristics to decide the marginal cases. (Sometimes this perspective is useful, and sometimes it's not. Hmm, why is that?)