Recall that the neural net in function 1 is a classifier; it makes no predictions about the relationships between the variables. All function 1 does is take a large domain of input signals and map it onto a smaller range of internal variables. Technically you don’t need a neural net to do this, you could hard-code it in various ways, but I like picturing the original Perceptron diagram when I think about it.
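To make that concrete, here is a minimal sketch of what I mean by function 1: a single-layer, perceptron-style classifier that collapses a large vector of input signals into a handful of internal variables. The dimensions, the random weights, and the thresholding are all illustrative assumptions on my part, not part of the toy model itself; the same role could be played by hard-coded rules.

```python
import numpy as np

# Assumed, illustrative dimensions: 1000 raw input signals collapsed into 8 internal variables.
N_SIGNALS = 1000
N_INTERNAL = 8

rng = np.random.default_rng(0)

# Fixed weights for the sketch -- the "classifier" could just as well be hard-coded rules.
W = rng.normal(size=(N_INTERNAL, N_SIGNALS))
b = np.zeros(N_INTERNAL)

def function_1(signals: np.ndarray) -> np.ndarray:
    """Perceptron-style classification: many input signals -> few internal variables.

    No claim is made here about how the internal variables relate to one another;
    this is purely a many-to-few encoding step.
    """
    return (W @ signals + b > 0).astype(int)  # binary internal variables

# Usage: a raw slice of "the world" becomes a small internal code.
raw_signals = rng.normal(size=N_SIGNALS)
internal_state = function_1(raw_signals)
print(internal_state)  # a length-8 array of 0s and 1s
```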
The point of using this toy model, rather than simply assuming an ordering over world states, is to show that any model of world state is produced by particular functions operating on real-world data. This encoding is itself what generates the epistemic problems: binding a semantic meaning to a particular signifier always introduces some uncertainty when that encoding is later used as a reference point. In the toy model, X can be an arbitrarily complex phenomenal experience, encompassing every observable state of the world, so even for phantom values of X all we’re doing is extrapolating to the experience we’d expect in a given situation. By constructing a function that gives us the relationship between different values of X, we can plan how to reach a specific value of X that we want: take the current and expected conditions of X as input, apply a function Ω, and you have a plan for achieving a goal. The ordering function then ranks these possible plans from best to worst according to whatever arbitrary criteria we choose.
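Here is a rough sketch of that planning loop under stated assumptions: values of X are the small internal codes produced by function 1, Ω is assumed to enumerate candidate action sequences connecting a current X to a desired X, and the ordering function ranks those candidates by an arbitrary criterion. The names and the toy "flip a variable" actions are mine for illustration only; nothing here is meant as the definitive form of Ω or of the ordering.

```python
from typing import Callable, Iterable, List, Tuple

State = Tuple[int, ...]   # an encoded value of X (internal variables)
Plan = List[str]          # a sequence of actions, represented as labels

def omega(current: State, desired: State) -> Iterable[Plan]:
    """Stand-in for Ω: given current and desired values of X,
    enumerate candidate plans that would connect them.
    Toy version: act on the internal variables that differ."""
    diffs = [i for i, (c, d) in enumerate(zip(current, desired)) if c != d]
    # Two illustrative candidates: change variables one at a time, or all at once.
    one_at_a_time = [f"flip_var_{i}" for i in diffs]
    all_at_once = [f"flip_vars_{'_'.join(map(str, diffs))}"] if diffs else []
    return [one_at_a_time, all_at_once]

def ordering(plans: Iterable[Plan], criterion: Callable[[Plan], float]) -> List[Plan]:
    """The ordering function: rank candidate plans best-to-worst
    by whatever arbitrary criterion we hand it (lower is better here)."""
    return sorted(plans, key=criterion)

# Usage: current vs. desired X, Ω generates candidate plans, the ordering ranks them.
current_x: State = (1, 0, 1, 0)
desired_x: State = (1, 1, 1, 1)
ranked = ordering(omega(current_x, desired_x), criterion=len)  # criterion: fewest actions
print(ranked[0])  # the "best" plan under this arbitrary criterion
```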