I have no evidence for this, but I have a vibe that if you build a proper mathematical model of agency/co-agency, then prediction and steering will end up being dual to one another.
My intuition why:
A strong agent can easily steer a lot of different co-agents; those different co-agents will all be steered towards the same goals, namely the agent's.
A strong co-agent is easily predictable by a lot of different agents; those different agents will all converge on a common map of the co-agent.
Also, category theory tells us that there is normally only one kind of thing, but sometimes there are two, dual to each other. One example is sums and products of sets, which are co- to each other (the sum can actually be called the coproduct); there is no other operation on sets which is as natural as sums and products.
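To make that duality concrete: the product and the coproduct (sum) of two sets are characterised by universal properties that are mirror images of each other, the second being the first with every arrow reversed. A minimal sketch, using the standard notation (π for the projections out of the product, ι for the inclusions into the sum):

```latex
% Product A \times B: maps *into* it correspond to pairs of component maps.
\forall Z,\ \forall f : Z \to A,\ \forall g : Z \to B,\quad
\exists!\ \langle f, g \rangle : Z \to A \times B
\ \text{such that}\ \pi_A \circ \langle f, g \rangle = f
\ \text{and}\ \pi_B \circ \langle f, g \rangle = g.

% Coproduct (disjoint sum) A + B: maps *out of* it correspond to pairs of
% component maps; the same statement with every arrow reversed.
\forall Z,\ \forall f : A \to Z,\ \forall g : B \to Z,\quad
\exists!\ [f, g] : A + B \to Z
\ \text{such that}\ [f, g] \circ \iota_A = f
\ \text{and}\ [f, g] \circ \iota_B = g.
```

This is the sense of "co-" being gestured at here: the same structure with the arrows flipped.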
I lean towards all epistemic environments being adversarial unless proven otherwise by strong outside-view evidence (e.g. your colleagues at a trading firm, whom you regularly see trading successfully using strategies they freely discuss with you). Maybe I’m being too paranoid, but I think that the guf in the back of your mind is filled with memetic tigers, and sometimes those sneak out and pounce into your brain. Occasionally, they turn out to be excellent at hunting down your friends and colleagues as well.
An adversarial epistemic environment functions similarly to a normal adversarial environment, but in reverse. Instead of any crack in your code (and therefore any crack in the argument that your code is secure) being exploitable, the argument comes into your head already pre-exploited for maximum memetic power. And using an EFA is one way to create a false argument that’s highly persuasive.
I also think that, in the case where the EFA turns out to be correct, it’s not too hard to come up with supporting evidence: either a (good) reference-class argument (though beware any choice of reference class!) or some argument as to why your search really is exhaustive.