It is evidence for the natural abstraction hypothesis in the technical sense that P[NAH|paper] is greater than P[NAH], but in practice that’s just not a very good way to think about “X is evidence for Y”, at least when updating on published results. The right way to think about this is “it’s probably irrelevant”.
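To make the "technically evidence, practically irrelevant" point concrete, here is a minimal sketch with made-up numbers (every probability below is a hypothetical chosen for illustration, not anything claimed in the thread): if the paper's result is almost as likely under not-NAH as under NAH, the Bayesian update exists but is tiny.

```python
# Minimal Bayes sketch with made-up numbers (all values are assumptions
# for illustration only).
p_nah = 0.5                    # prior P[NAH]
p_paper_given_nah = 0.90       # P[paper's result | NAH]
p_paper_given_not_nah = 0.85   # P[paper's result | not NAH] -- nearly as high,
                               # since the result is expected either way

# Total probability of seeing the paper's result, then the posterior.
p_paper = p_paper_given_nah * p_nah + p_paper_given_not_nah * (1 - p_nah)
p_nah_given_paper = p_paper_given_nah * p_nah / p_paper

print(f"P[NAH]       = {p_nah:.3f}")            # 0.500
print(f"P[NAH|paper] = {p_nah_given_paper:.3f}")  # ~0.514: an update, but a negligible one
```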
Thank you John! Is there any high-bit or confounder-controlling evidence that would move your prior? Say, something like English plus some other language? (Also, I might be missing something deeper about the heuristic in general; if so, I apologize!)
My default assumption on all empirical ML papers is that the authors Are Not Measuring What They Think They Are Measuring.