Highly thought-provoking post. Thanks a lot as always, EY. Here’s what I got provoked into thinking. “There are socks on my feet.” means “a bunch of fundamental quanta arranged sock-wise surrounds the two groups of fundamental quanta arranged foot-wise, which causally interact, via a bunch of quanta arranged nerve-wise, with the bunch of quanta arranged brain-wise whose average spatial center is (x, y, z),” where (x, y, z) is the average center coordinate of my brain. “All snow is white.” means “if you arrange a group of fundamental quanta snow-wise, then that group must also be arranged white-wise.” “Shoes are not fundamental.” means “a single fundamental quantum cannot be arranged shoe-wise.” “Electrons are fundamental.” means “you can arrange a single fundamental quantum electron-wise.” And probably most importantly, “Shoes don’t exist.” means “there is no group of fundamental quanta in the universe that is arranged shoe-wise.” So, clearly, “Shoes are not fundamental” does not imply “Shoes don’t exist.” I get the feeling that if fewer anti-reductionists thought they could substitute “non-existent” for “non-fundamental,” much of the discomfort they experience with the reductionist thesis would go away.
The problem I come to is figuring out how we tell the difference between quanta arranged x-wise and quanta not arranged x-wise. I also presume that for at least some categories, such as hand, a given group of quanta is not simply arranged hand-wise or not hand-wise: some sets of quanta are arranged more hand-wise than others.
Thingspace might help determine the actual categories of the empirical world, but I’m not sure it can help us understand how neural networks sort particulars. At best it can tell us how they would sort them if they were the best scientists they could be. But suppose you can show the following: given a group of quanta arranged categorical-neural-network-wise, hooked up to quanta arranged sensing-apparatus-wise, and given another, unspecified group of quanta within range of the sensing apparatus, there is some way to tell, purely in terms of the relative properties of the fundamental quanta involved, to what degree the unspecified group triggers the neural network. Then you have an existence proof for an algorithm that sorts all quanta groups into any neurally definable category.
This lets us make some sort of super empirical syllogism:
(1): If a quanta group triggers categorical neural network A to degree x, then it also triggers categorical neural network B to degree y.
(2): If a quanta group triggers categorical neural network B to degree y, then it also triggers categorical neural network C to degree z.
(c): If a quanta group triggers categorical neural network A to degree x, then it triggers categorical neural network C to degree z.
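The chained premises above can be sketched in code. This is a toy model of my own (nothing from the post): each categorical network is just a function from a stimulus to an activation in [0, 1], and if B’s activation is a fixed function of A’s, and C’s a fixed function of B’s, then the “syllogism” is plain function composition.

```python
# Toy sketch (my own construction): categorical networks as functions
# from a stimulus to an activation level in [0, 1].

def activation_A(stimulus):
    # toy category A: activation grows with the length of the stimulus,
    # clipped into [0, 1]; a stand-in for "degree x"
    return max(0.0, min(1.0, len(stimulus) / 10))

def activation_B(x):
    # premise (1): B's activation y is a fixed function of A's x
    return x ** 2

def activation_C(y):
    # premise (2): C's activation z is a fixed function of B's y
    return 1 - y

def conclusion_C(stimulus):
    # conclusion (c): z follows from x by composing the two premises
    return activation_C(activation_B(activation_A(stimulus)))

q = "a quanta group"  # stands in for a fully specified quanta group
x = activation_A(q)   # degree to which q triggers A
z = conclusion_C(q)   # degree to which q triggers C, via the syllogism
```

Of course, the premises here are deterministic by construction; the interesting empirical question is whether real neural categories actually stand in such fixed functional relations.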
Here a quanta group is a complete specification of all the quanta involved and all of their relative positions over a time interval. We can say that 1 is the most a neural category can be triggered by a quanta group, and 0 is the least. So given that “if a quanta group triggers neural category A to degree x, then it also triggers neural category B to degree (1 − x),” we say that B is the complement of A. The thing is that this theory of categories only works for one agent at a time, not for an entire linguistic community. My categorical network for “dog” is probably very different from yours at the level of neurons, never mind the level of fundamental quanta, but they are activated to very similar degrees by identical quanta groups. There is some objective expected error in approximating the degree of activation of my “dog” category using yours, but it can’t be too large, since we still manage to communicate effectively about dogs.
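The complement idea and the inter-agent approximation error can both be illustrated with another toy sketch of my own. The two “dog” detectors below are wired differently (different weights) but behave similarly, and the expected error of approximating one with the other is just the mean absolute difference of their activations over sampled stimuli.

```python
import random

# Toy illustration (my own construction, not from the post): two
# differently implemented "dog" categories that agree closely, plus
# the complement category and the expected approximation error.

def my_dog_activation(features):
    # one "wiring": a weighted sum of features, clipped into [0, 1]
    s = 0.6 * features["fur"] + 0.4 * features["barks"]
    return max(0.0, min(1.0, s))

def your_dog_activation(features):
    # a different "wiring" with similar behavior on the same inputs
    s = 0.55 * features["fur"] + 0.45 * features["barks"] + 0.02
    return max(0.0, min(1.0, s))

def not_dog_activation(features):
    # the complement category: triggered to degree (1 - x)
    return 1 - my_dog_activation(features)

# sample some stimuli (feature values in [0, 1]) and estimate the
# expected error of approximating my category with yours
random.seed(0)
samples = [{"fur": random.random(), "barks": random.random()}
           for _ in range(1000)]

expected_error = sum(abs(my_dog_activation(f) - your_dog_activation(f))
                     for f in samples) / len(samples)
# a small expected_error corresponds to us communicating
# effectively about dogs despite different implementations
```

The point of the sketch is just that “same category” across agents is a claim about input–output agreement, not about shared implementation.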
I’m sure I have something more to say about all this, but I have HW. Maybe I’ll write a post later after I’ve collected my thoughts a bit more if this comment goes over well.