I suppose the learning process could work in a more legible way; for instance, it's not clear why neural networks generalize successfully. But this seems more closely related to the theoretical understanding of learning algorithms than to their knowledge representation.