I’m not sure what you’re saying with your second point. There are algorithms that can count the number of rocks on a beach (which take as input the graph of bonds between atoms; if you have an objection to counting bonds, we can instead talk about algorithms that count heaps of rocks based on the proximity of atoms). You could make any of the following objections to “three rocks”:
1. There is no canonical such algorithm, because such an algorithm needs a scale parameter. I would respond that there are ways for an algorithm to decide on an appropriate scale parameter.
2. There are many distinct such algorithms which agree when there are obviously three rocks, but disagree on where the boundary between “three rocks” and “two rocks” is. I would respond that this objection only applies to ambiguous arrangements of rocks.
3. There is a canonical family of rock-counting algorithms, but the fact that we care about rock-counting algorithms is a fact about ourselves, not a fact about the external world. I agree with this.
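For concreteness, the kind of proximity-based counting algorithm described above can be sketched as follows. This is a minimal illustration, not anyone's actual proposal: “rocks” are the connected components of a proximity graph over atoms, and the distance cutoff `scale` is exactly the parameter the first objection points at. All names and numbers here are made up for the example.

```python
from itertools import combinations

def count_heaps(atom_positions, scale):
    """Count 'rocks' as connected components of a proximity graph:
    two atoms belong to the same heap if they are within `scale` of
    each other, directly or through a chain of neighbors."""
    n = len(atom_positions)
    parent = list(range(n))  # union-find over atoms

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, j in combinations(range(n), 2):
        dist = sum((a - b) ** 2 for a, b in
                   zip(atom_positions[i], atom_positions[j])) ** 0.5
        if dist <= scale:
            union(i, j)
    return len({find(i) for i in range(n)})

# Three tight clusters read as "three rocks" at one scale...
atoms = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (9.0, 0.0)]
print(count_heaps(atoms, scale=0.5))    # -> 3
# ...but the same atoms are "one rock" at a larger scale:
print(count_heaps(atoms, scale=100.0))  # -> 1
```

The answer is well-defined once `scale` is fixed, but nothing in the input graph fixes it; that is the sense in which the algorithm is not canonical.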
My second point is all about discretization. The ability to count the number of rocks comes after the ability to say “this is a rock”, and I argue that reality does not come equipped with a “natural” criterion for forming a distinguished set of atoms and proclaiming it a rock. A discretization procedure must be supplied that will do this. The output of such a procedure is a mental construct that we perceive as a rock and call a rock; it is such constructs that we count.
I like your use of an algorithm to sharpen the question, but since it’s discretization that’s fundamental to my point, not the counting of objects, let’s consider an object recognition algorithm. These exist; supplied with an image of the beach with a rock on it, such an algorithm may be able to circle the rock accurately. We can imagine a really exact snapshot of information about all the atoms and their bonds at the beach (subject to the quantum mechanical constraints that aren’t the issue here and will be ignored), and a much more powerful algorithm of essentially the same kind that will process such a snapshot and delineate the rock for us.
I claim that such an algorithm comes with hardcoded constants that are specific to the human-style discretization procedure, and have no “natural” importance in reality. The scale parameter you mention is one such constant, but to my mind there are many more. For example, one way in which such an algorithm might determine the boundary of an object is by considering differences in matter density (air is much less dense than rock); but even though it may compute and compare the densities, there’s no natural way for it to decide which difference is significant and which is not. The choice of threshold will be appropriate to what our crude human senses are able to perceive, and what counts as significant in human experience. Or consider the inherently fuzzy boundary of the rock, at which the atomic behavior is very different from the interior of the rock (atoms leave in some quantities, air molecules seep in, perhaps there’s an oxide layer, perhaps some vapor is forming...). When we perceive the rock as an object, we just don’t care about that; and the algorithm will have to decide in some manner: “this layer of atoms with very different behavior is small enough to be considered the boundary layer of the object rather than, say, analyzed as a completely different and separate object”. Well, “small enough” is another threshold set at human values.
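The density-threshold point can be made concrete with a toy sketch. This is purely illustrative (the profile and cutoff values below are invented), and the arbitrariness of the cutoff is exactly the point:

```python
def boundary_by_density(densities, threshold=500.0):
    """Label each cell of a 1-D density profile (kg/m^3) as part of the
    'object' or not, using a hardcoded cutoff. Nothing in the profile
    itself dictates where `threshold` should sit; the choice encodes
    what an observer counts as a significant density difference."""
    return [d >= threshold for d in densities]

# A profile crossing from air (~1 kg/m^3) through a fuzzy transition
# layer into solid rock (~2600 kg/m^3):
profile = [1.2, 1.3, 40.0, 300.0, 1200.0, 2500.0, 2600.0]
print(boundary_by_density(profile))
# -> [False, False, False, False, True, True, True]
print(boundary_by_density(profile, threshold=100.0))
# -> [False, False, False, True, True, True, True]
```

With `threshold=500` the transition cells at 40 and 300 fall outside the object; with `threshold=100` one of them falls inside. The “edge” of the rock moves with a constant we chose, not one reality supplies.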
I disagree with you that “there are ways for an algorithm to decide on” appropriate values of all such constants/thresholds/parameters without utilizing any human-specific values. I can see how this just might barely be possible with only the scale parameter, through the use of some ostensibly objective constraints. E.g. a nano-creature faced with the rock will just “see” billions upon untold billions of separate “objects”, but, you might argue, if we require the algorithm to end up with a small number of recognized objects (i.e. just the one rock), it will be able to dial the scale parameter to roughly the human scale without hardcoding any human-specific numbers. But even that is something like a cop-out, because it’s not clear why we should expect to find a small number of objects rather than 10^10 of them; and in any case, I believe that if you properly account for the many separate parameters you need to tweak (or constants you need to set), this trick won’t work anyway.
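The trouble with “dial the scale until the object count is small” can be seen in a minimal sketch (single-linkage clustering of invented 1-D points; the numbers are illustrative only). Matter at a beach has structure at many scales, so the object count has several plateaus, and nothing in the data privileges one of them:

```python
def cluster_count(points, scale):
    """Number of clusters when 1-D points within `scale` of a
    neighbor are merged (single-linkage on the sorted points)."""
    pts = sorted(points)
    count = 1
    for a, b in zip(pts, pts[1:]):
        if b - a > scale:  # gap too wide: start a new cluster
            count += 1
    return count

# A hierarchy of structure: tight pairs, grouped into two loose heaps.
points = [0.0, 0.1, 1.0, 1.1, 10.0, 10.1, 11.0, 11.1]
print([cluster_count(points, s) for s in (0.2, 2.0, 20.0)])  # -> [4, 2, 1]
```

Four objects, two objects, and one object are all “small numbers”; the constraint alone does not pick out a unique scale, let alone the human one.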
I think a stronger point for your argument could be made by directly contrasting “bringing things together” and “separating the world into things”. Spatial separation is probably the first thing humans learn to do in order to count (and our vision apparently does this automatically for small numbers of things), and that can be followed by learning more abstract ways of discretizing. Spatially putting things next to each other to count them is valid so long as the separation remains to keep them distinct. Mentally moving our viewpoint is equivalent to moving the objects being counted next to each other. Glomming things back into a whole is what does not work, as you pointed out. Making it clear that the separation is what allowed the things to be counted in the first place seems like the most important thing to me.
I like your first point.
You’re right!