There is an idea I have used implicitly in the OP which might benefit from being identified explicitly: the level of abstraction at which a concept is applied matters.
To illustrate what I mean, consider the end of the confused statements quote from the MIRI post:
> Today, these conversations are different. In between, folks worked to make themselves and others less fundamentally confused about these topics—so that today, a 14-year-old who wants to skip to the end of all that incoherence can just pick up a copy of Nick Bostrom’s *Superintelligence*.
I think it would be reasonable for someone reading my post to look at that section of the MIRI post and ask: so what is the *Superintelligence* of strategy? My answer is that there isn’t one yet; this is what Sun Tzu and Clausewitz tried and failed to accomplish. Nor do I believe we have a good enough understanding of the component disciplines of strategy to write one (consider our mastery of computer science and information theory relative to our mastery of political science, economics, and psychology). We are too confused.
I think the key insight of Meiser’s approach is that he applied scientific reasoning as a generative rule for a strategy instance, rather than trying to describe a science of strategy in general and leaving the instance as an exercise for the reader. In other words, he took the scientific perspective and aimed it one layer of abstraction down. This is what allows us to account for confusion.
This sensitivity to the level of abstraction is a big reason deconfusion is so awesome: it works no matter where you aim it, even when aimed at aiming itself.