The main problem I have with this type of reasoning is the arbitrarily drawn ontological boundary. Why is IGF “not real” while the ML objective function is “real”? If we really zoom in on the training process, the only training goal verifiable in a brutally positivist way is “whatever direction in coefficient space the loss function decreases on the current batch of data”, which seems to me to correspond closely to “whatever traits are spreading in the current environment”.
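A minimal numpy sketch (illustrative only, not from the original comment) of the point above: each SGD step’s “goal” is nothing more than the negative gradient of the loss on the current batch, and different batches generally pull the weights in different directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_gradient(w, x, y):
    """Gradient of the per-batch MSE loss mean((x @ w - y)^2) w.r.t. w."""
    return 2 * x.T @ (x @ w - y) / len(y)

w = np.zeros(2)
# Two different batches drawn from the same underlying data distribution.
x1, y1 = rng.normal(size=(8, 2)), rng.normal(size=8)
x2, y2 = rng.normal(size=(8, 2)), rng.normal(size=8)

g1 = batch_gradient(w, x1, y1)
g2 = batch_gradient(w, x2, y2)

# Cosine similarity between the two batch gradients: the descent direction
# is defined per batch, analogous to traits spreading in a local environment.
cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
print(cos)  # generally not 1: each batch defines its own descent direction
```

The analogy it illustrates: just as there is no single global “fitness” being optimized, only whatever happens to spread locally, there is no single step-level objective, only the direction the current batch’s loss happens to decrease.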