Is that the right framing? In principle the training data represents quite a lot of contact with reality, if that's where you sampled it from. It almost sounds like you're saying current ML functionally makes you specify an ontology (and/or imply one through your choices of architecture and loss), and we don't know how to not do that. But something conceptually in the direction of sparsity or parsimony (~the simplest suitable ontology without extraneous parts) is still presumably what we're reaching for; it's just that this is much easier said than done?
Alternately, is there something broader you’re pointing at where we shouldn’t be trying to directly learn/train the right ontology, we should rather be trying to supply that after learning it ourselves?