I quite like this framing, and think Strategic Competence is a useful term and concept. I explored a related idea in Wise AI Advisors at the Hinge of History:
I posit that if:

- We have trusted AI systems that people turn to as advisors,
- and the trust in the AI advisors is well placed because they have good epistemics,
  - where "good epistemics" roughly means they consistently use reliable methods to figure out what is true and avoid self-deception,
- and the AI advisors have shared epistemics,
  - where "shared epistemics" roughly means there is a shared foundation that allows different AI advisors and people to trust one another's reasoning,

then those AI Advisors would advise their Principals to avoid an Intelligence Explosion (if and only if this is in fact a real danger), and humanity could coordinate around this advice.
I expect Strategic Competence to largely track general model capabilities, whereas shared trusted epistemics requires more deliberate work on validation, auditing, and institution-building that won’t happen by default.
This overlaps with your points on improving AI philosophical competence, but with more focus on making epistemics verifiable and legible across different systems and actors. Alongside model improvements, I think that is what would be needed to produce guidance people actually follow, and the common knowledge needed for preference cascades, so that many actors come to agree with wise AI advisors on preventing RSI takeoffs.
It's much easier now to automate mechanisms like liquid democracy; you could run experiments inside organizations that would be a) fun and b) a test of practicality. Google ran an experiment in 2015 using liquid democracy to select snacks; do it again, but with AI delegates either representing your preferences or making the snack decisions.
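The core mechanism is simple to automate. As a hypothetical sketch (the function and data shapes here are my own, not from any existing liquid-democracy library): each participant either votes directly or delegates to another participant, delegated votes follow the chain until they reach a direct vote, and delegation cycles are treated as abstentions.

```python
# Minimal liquid-democracy tally: direct votes plus transitive delegation.
from collections import Counter

def tally(direct_votes, delegations):
    """direct_votes: {voter: option}; delegations: {voter: delegate}."""
    counts = Counter()
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until we hit a direct vote or a cycle.
        while current in delegations and current not in direct_votes:
            if current in seen:  # cycle with no direct vote -> abstain
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            counts[direct_votes[current]] += 1
    return counts

# Snack vote: Alice and Bob vote directly; Carol delegates to Alice,
# and Dave delegates to Carol, so both resolve to Alice's choice.
result = tally(
    {"alice": "fruit", "bob": "chips"},
    {"carol": "alice", "dave": "carol"},
)
print(result)  # Counter({'fruit': 3, 'chips': 1})
```

An AI delegate would slot into this as just another entry in `direct_votes`, casting a ballot on behalf of the people who delegated to it, which is what makes the within-organization experiment cheap to run.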