QNRs look promising as a more general knowledge representation than either natural language or explicit semantic representations. But I notice that neither this QNR post nor Drexler’s prospectus mentions drift. Yet drift (though you might want to call it “refinement”) is common both in natural languages (as they adapt to growing knowledge) and in neural representations (“in mice”).
Related: The “meta-representational level” from Meditation and Neuroscience, some odds and ends