I lean towards treating all epistemic environments as adversarial unless proven otherwise by strong outside-view evidence (e.g. your colleagues at a trading firm, whom you regularly see trading successfully using strategies they freely discuss with you). Maybe I'm being too paranoid, but I think that the guf in the back of your mind is filled with memetic tigers, and sometimes those sneak out and pounce into your brain. Occasionally, they turn out to be excellent at hunting down your friends and colleagues as well.
An adversarial epistemic environment functions like a normal adversarial environment, but in reverse. Instead of any crack in your code (and therefore any crack in the argument that your code is secure) being exploitable by an attacker, the argument arrives in your head already pre-exploited for maximum memetic power. And using an EFA is one way to create a false argument that's highly persuasive.
I also think that, in the case where the EFA turns out to be correct, it's not too hard to come up with supporting evidence: either a (good) reference-class argument (though beware any choice of reference class!), or some argument as to why your search really is exhaustive.