I think an attempted EFA is a strong argument, and people should usually take it seriously.
I can see two reasons why you should remain mostly unconvinced by an EFA:
You have a good object-level reason to think the process that generated the elements is not sufficiently good, for example:
the person listing the possible “things it could be” is cherry-picking them (this is less of a concern if an adversary is picking the “things it could be”)
the adversary generating the “things it could be” is not sufficiently strong when the question is about strategies an adversary could use (see Davidmanheim’s comment)
You have an extremely strong prior (e.g. against things that break the laws of physics) that there is something wrong with the argument.
If your prior is that P(it’s A)=0.3, P(it’s B)=0.3, P(it’s C)=0.3, P(it’s something else you can’t think of)=0.099, and P(claim)=0.001, then learning that it’s neither A, B, nor C should leave you with P(claim)=0.01 and P(it’s something else you can’t think of)=0.99.
You should update, but not all the way. (Note that putting only p=0.099 on “it’s something else” means you are quite knowledgeable about the domain. If you are not knowledgeable about the domain, most of your probability mass should be on “it’s something else”, in which case the update will be even smaller.)
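The update in the example above is just renormalization: rule out the enumerated options and rescale what remains. A minimal sketch (the labels are illustrative, matching the numbers in the example):

```python
# Prior over hypotheses, matching the example: three named options,
# an "I can't think of it" bucket, and the claim itself.
prior = {"A": 0.3, "B": 0.3, "C": 0.3, "something else": 0.099, "claim": 0.001}

# Learn that it is neither A, B, nor C: drop them and renormalize the rest.
remaining = {k: v for k, v in prior.items() if k not in {"A", "B", "C"}}
total = sum(remaining.values())  # 0.1
posterior = {k: v / total for k, v in remaining.items()}

print({k: round(v, 6) for k, v in posterior.items()})
```

This gives P(claim)=0.01 and P(something else)=0.99: a tenfold update on the claim, but the claim is still very unlikely. If the prior on “something else” were larger, `total` would be larger and the rescaling of P(claim) correspondingly smaller.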
When neither of these reasons apply, I think skepticism against EFA is unwarranted.