If I have two different sets of data and compress each of them well, I would not expect the two compressions to be similar, let alone the same.
If I drop two staplers, I can give the same compressed description of the data from their two trajectories: “uniform downward acceleration at close to 9.8 meters per second squared”.
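To make “compressed description” concrete, here is a minimal sketch of what I mean; the noise level and sample count are made-up illustration, not real measurements. Both trajectories reduce to a single fitted parameter, an acceleration near 9.8 m/s²:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)  # seconds

def noisy_drop(noise=0.01):
    """Simulated position data (meters fallen) for one dropped stapler."""
    return 0.5 * 9.8 * t**2 + rng.normal(0.0, noise, t.size)

for label in ("stapler A", "stapler B"):
    y = noisy_drop()
    # The entire "compressed description" is one fitted parameter: a.
    a = np.linalg.lstsq((0.5 * t**2)[:, None], y, rcond=None)[0][0]
    print(f"{label}: fitted acceleration ≈ {a:.2f} m/s^2")
```

Fifty noisy measurements per stapler compress down to one number, and it is the same number for both.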
But then the fence can suddenly come to an end or make an unexpected 90-degree turn. How many posts do you need to see to reasonably conclude that post #5000 exists?
If I found the blueprint for the fence lying around, I’d assign a high probability that the number of fenceposts is what’s shown in the blueprint, minus any that might be knocked over or stolen. Otherwise, I’d start with my prior knowledge of the distribution of fence sizes, and update according to any observations I make about which reference class of fence this is, and yes, how many posts I’ve encountered so far.
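Here is roughly how that update might look as a toy calculation. The flat prior and the 10,000-post cap are illustrative assumptions standing in for real knowledge about fences:

```python
import numpy as np

N_MAX = 10_000
prior = np.ones(N_MAX + 1)  # P(fence has n posts), n = 0..N_MAX (toy flat prior)
prior /= prior.sum()

def p_post_exists(post, posts_seen):
    """P(post #`post` exists | the first `posts_seen` posts were observed)."""
    posterior = prior.copy()
    posterior[:posts_seen] = 0.0   # fences shorter than what we saw are ruled out
    posterior /= posterior.sum()
    return posterior[post:].sum()  # mass on fences long enough to include `post`

for seen in (10, 1_000, 4_999):
    print(f"seen {seen:>5} posts -> P(post #5000 exists) ≈ "
          f"{p_post_exists(5000, seen):.3f}")
```

With this prior, seeing the first 10 posts barely moves the probability that post #5000 exists, while seeing post #4999 nearly settles it; a more informed prior over fence sizes would shift these numbers, but the shape of the update is the same.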
It seems like you haven’t gotten on board with science being a reverse-engineering process that outputs predictive models. But I don’t think this is a controversial point here on LW. Maybe it would help to clarify that a “predictive model” outputs probability distributions over outcomes, not predictions of single forced outcomes?
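For instance (toy numbers, purely to illustrate the distinction):

```python
# A "forced outcome" prediction vs. the distribution a predictive model outputs.
point_prediction = 5000                            # "the fence has exactly 5000 posts"
distribution = {4999: 0.2, 5000: 0.5, 5001: 0.3}   # P(total posts = n)

# The distribution answers questions the point prediction can't,
# e.g. the probability that post #5000 exists at all:
p_exists = sum(p for n, p in distribution.items() if n >= 5000)
print(p_exists)  # 0.8
```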
And if I release two balloons, they will have “uniform upward acceleration at close to 9.8 meters per second squared until terminal velocity”. For properly law-like things, you expect them to hold with no or minimal revision. That it is a compression makes application to new cases complicated. How do you compress something you don’t have access to?
How do you know that a given blue piece of paper is a blueprint for a given fence?
The degree of reasonableness comes from things like a 5001-post fence and a 4999-post fence both being possible. If induction were rock solid, you would very quickly, or immediately, come to believe in an infinitely long fence. But induction is unreliable and points in a different direction than just checking whether each post is there. Yet we often find ourselves in a situation where we have made some generalization, checked it somewhat but not exhaustively, and would like to call our epistemic state “knowing” the fact.