I still don’t see that you’ve demonstrated overmathematization as a hindering factor.
- Finance gurus used advanced math.
- Finance gurus made bad assumptions about the mortgage market.
How have you shown that one caused the other? What method (that you should have presented in your first post instead of dragging this out to at least four) would have led finance gurus to not make bad assumptions, and would have directed them toward less math?
I agree that it’s gotten to the point where academia adheres to standards that don’t actually maximize research progress, and too often tries to look impressive at the expense of doing something truly worthwhile. But what alternate epistemology do you propose that could predictably counteract this tendency? I’m still waiting to hear it.
(And the error in assumptions was made by practitioners, where the incentive to produce meaningful results is much stronger, because they actually get a chance to be proven wrong by nature.)
> But what alternate epistemology do you propose that could predictably counteract this tendency?
I think the compression principle provides a pretty stark criterion. If a mathematical result can be used to achieve an improved compression rate on a standard empirical dataset, it’s a worthy contribution to the relevant science. If it can’t, then it still might be a good result, but it should be sent to a math journal, not a science journal.
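The compression criterion can be made concrete with a minimal sketch. Here a toy dataset and a toy linear "theory" stand in for a standard empirical dataset and a proposed mathematical result (all names and numbers are illustrative, not part of the original argument): the theory earns its keep only if encoding its residuals, plus the theory itself, is smaller than compressing the raw data directly.

```python
import struct
import zlib

# Toy stand-in for a "standard empirical dataset": a linear trend
# plus small periodic noise. Purely illustrative.
data = [2.0 * i + ((i * 7919) % 13 - 6) * 0.1 for i in range(1000)]

def size_bytes(values):
    """Compressed size (in bytes) of a float sequence, via zlib."""
    raw = b"".join(struct.pack("<d", v) for v in values)
    return len(zlib.compress(raw, 9))

# Baseline: compress the raw observations directly.
baseline = size_bytes(data)

# Candidate "theory": y = 2x. It is judged by whether encoding the
# residuals it leaves behind, plus the cost of stating the theory
# (here, one 8-byte parameter), beats the baseline.
residuals = [y - 2.0 * i for i, y in enumerate(data)]
model_cost = struct.calcsize("<d")
with_theory = size_bytes(residuals) + model_cost

print("raw data:", baseline, "bytes; theory + residuals:", with_theory, "bytes")
```

On this toy data the theory captures the trend, the residuals become highly regular, and the two-part encoding comes out smaller — the criterion says the result contributes to the science. A theory that failed to shrink the total description length would fail the test, however elegant its mathematics.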
I think the problem with overmathematization is that it adds prestige to theories while making them harder to check.