If there are a very large number of fundamental particles, it is likely that there is some reason for the number, rather than being a merely random number, and if so, then you would likely reach the truth quicker by using some version of Kolmogorov complexity, rather than counting the particles one by one as you discover them.
This is circular. You assume that the universe is likely to have low K-complexity and conclude that a K-complexity razor works well. Kelly’s work requires no such assumption; this is why I think it’s valuable.
Yes, “It is likely that there is some reason for the number” implies a low Kolmogorov complexity. But it seems to me that if you look at past cases where we have already discovered the truth, you may find that there were indeed cases with reasons, and therefore low K-complexity. If so, that would give you an inductive reason to suspect that this will continue to hold, and that argument would not be circular. In other words, as I’ve said previously, we learn in part by induction which type of razor is suitable.
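(The “low K-complexity” idea here can be made concrete. True Kolmogorov complexity is uncomputable, but compressed length is a standard computable proxy; the sketch below is my own illustration, not anything from Kelly’s work. A number produced by a simple rule compresses far better than a random-looking one of the same length:)

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form: a crude, computable
    stand-in for Kolmogorov complexity (which is uncomputable)."""
    return len(zlib.compress(data, 9))

# A sequence with "a reason behind it" -- a short generating rule --
# compresses to far fewer bytes than its raw length.
patterned = b"12" * 500  # 1000 bytes generated by a 2-byte rule

# A random-looking sequence of the same length stays near 1000 bytes,
# since zlib finds no short description of it.
random.seed(0)
randomish = bytes(random.randrange(256) for _ in range(1000))

print(compressed_size(patterned))   # much smaller than 1000
print(compressed_size(randomish))   # roughly incompressible
```

(On this proxy, a razor that prefers hypotheses with short descriptions is exactly a preference for the “patterned” case over the “many, for no reason” case.)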
But then you’ve got to justify induction, which is as hard as justifying Occam.
I have a justification for induction too. I may post it at some point.
Is this a “margin is too small to contain” type of thing? Because I would be very surprised if there were a philosophically airtight location where recursive justification hits bottom.
Cool.
Could you expand on this?
The only reason I can think of is that the particles have various qualities, and we’ve got all the possible combinations.
I assume that there is some range of numbers which suggests an underlying pattern—it’s vanishingly unlikely that there’s any significance to the number of stars in the galaxy.
I think there was something in Gregory Bateson about this—that there’s a difference between a number that’s part of a system (he was talking about biology, not physics) as distinct from “many”.
So you claim that a K-complexity prior will work better because our universe is likely to have low K-complexity. This is circular—a justification of Occam’s razor that takes for granted your own intuitive concept of Occam’s razor. Kelly’s work makes no such assumption; that’s why it looks valuable to me.