I *knew* that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.
Trying to understand this.
I think what Yud means there is that a good model breaks quickly when it's wrong. It only explains a very small set of outcomes, because the universe is very specific. So it's a good sign, not a flaw, that it fails to explain most things.
It's a bit like David Deutsch arguing that good explanations should be hard to vary: every element of the model should be doing work, so that changing any of them breaks its fit with the phenomena.
That's because a good model should fail to explain falsehoods. You can think of this in a mathy way: a model takes data and uses it to narrow down which possible world you're in. If it doesn't narrow anything down, it's rather uninformative!
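To make the "narrowing down possible worlds" point concrete, here's a minimal Bayesian sketch in Python (my own toy example; the die, the probabilities, and the hypothesis names are all invented for illustration, not taken from the thread):

```python
# A toy Bayesian model comparison: a hypothesis that "permits everything"
# cannot gain much credence from evidence, because it never took a risk.
# (The die, the probabilities, and the hypothesis names are all invented
# for illustration.)

# Two hypotheses about a six-sided die:
vague = {face: 1 / 6 for face in range(1, 7)}  # forbids nothing
specific = {face: 0.75 if face == 6 else 0.05 for face in range(1, 7)}  # bets on 6

prior = {"vague": 0.5, "specific": 0.5}
observations = [6, 6, 3, 6, 6]  # mostly sixes, as the specific hypothesis predicted

def likelihood(model, data):
    """P(data | hypothesis): product of per-observation probabilities."""
    p = 1.0
    for outcome in data:
        p *= model[outcome]
    return p

lik = {"vague": likelihood(vague, observations),
       "specific": likelihood(specific, observations)}

# Bayes' rule: P(H | data) = P(data | H) * P(H) / P(data).
evidence = sum(lik[h] * prior[h] for h in prior)
posterior = {h: lik[h] * prior[h] / evidence for h in prior}

print(posterior)  # ~{'vague': 0.008, 'specific': 0.992}
# The specific hypothesis concentrated its probability mass, i.e. it
# "narrowed down which possible world we're in", so the data can move it.
# The vague hypothesis spread its mass over everything: it is never
# surprised, and therefore never vindicated either.
```

In odds form the update is just the likelihood ratio, posterior odds = prior odds × P(data | H₁) / P(data | H₂), so a hypothesis that spreads its probability over every possible outcome can never win that ratio against one that correctly concentrated its mass.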