A reasonable idea for this and other problems that don’t seem to suffer from ugly asymptotics would simply be to test them mechanically.
That is to say, it may be more efficient, requiring less brain power, to believe the results of repeated simulations. After walking through the Monty Hall decision tree and the statistics with people who couldn’t really follow either, only to have them end up believing the results of a simulation whose code is straightforward to read, I advocate this method: empirical verification over intuition or mathematics that are fallible (fallible because you yourself are fallible in your understanding, not because they contain a contradiction).
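For the Monty Hall case, a minimal sketch of such a simulation in Python (the trial count, and letting the host pick randomly among goat doors, are my own arbitrary choices):

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of staying vs. switching by brute repetition."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a door that hides a goat and isn't the contestant's pick.
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # Move to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # converges to ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # converges to ~0.667
```

Reading the loop is easier than trusting the tree: the host’s behavior is encoded in a single line, and the two win rates simply fall out of repetition.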
I don’t see this as a valid criticism, if it is intended as a dismissal, though the addendum “beware this temptation” is worth highlighting. While that point is worth making, the response “but someone would have noticed” is shorthand for “if your point were correct, others would likely believe it as well, and I do not see a subset of individuals who are also pointing it out.”
Let’s say there are ideas that are internally inconsistent, irrational, or bad (and are thus typically not propounded) and ideas that are internally consistent, rational, or good. Each idea comes as a draw from a bin of ideas, some proportion of which are good and some of which are bad.
Further, each person has an imperfect signal of whether an idea is good. Finally, we only see ideas that people believe are good, setting the stage for sample selection.
Therefore, when someone is propounding an idea and you have not heard it before, that fact makes it more likely to have been censored; that is, more likely to have been judged a bad idea internally and thus never suggested. I suggest, as a Bayesian update, that given you have never heard the idea before, it is more likely to be internally inconsistent/irrational/bad than an idea you hear constantly, which has passed many people’s internal consistency checks.
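This selection effect can be checked the same way. Below is a small Monte Carlo of the bin model sketched above; the prior proportion of good ideas, the accuracy of each person’s signal, and the number of people who have encountered the idea are all numbers I’ve invented for illustration:

```python
import random

P_GOOD   = 0.5    # assumed prior: proportion of good ideas in the bin
ACCURACY = 0.8    # assumed chance a person's signal matches the idea's quality
PEOPLE   = 10     # assumed number of people who independently met the idea
TRIALS   = 200_000

# tallies[k] = [bad ideas, all ideas] among ideas propounded by exactly k people
tallies = {k: [0, 0] for k in range(PEOPLE + 1)}

for _ in range(TRIALS):
    good = random.random() < P_GOOD
    # A person propounds the idea only if their noisy signal says "good";
    # ideas judged bad are censored, so we never hear them from that person.
    p_says_good = ACCURACY if good else 1 - ACCURACY
    propounders = sum(random.random() < p_says_good for _ in range(PEOPLE))
    tallies[propounders][1] += 1
    tallies[propounders][0] += not good

for k in (1, PEOPLE):
    bad, total = tallies[k]
    if total:
        print(f"P(bad | propounded by {k} of {PEOPLE}): {bad / total:.3f}")
```

With these particular numbers, an idea championed by a single person out of ten is almost certainly bad, while one that has passed all ten independent checks is almost certainly good; the exact figures move with the assumed signal accuracy, but the direction of the update does not.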