You’re partly on the right track.
If QNRs produced in disparate formats are used to train a neural net, then I'd guess they'd be translated to a common format first.
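To make that guess concrete, here's a minimal sketch of what "translate to a common format first" might look like. Everything here is an assumption of mine: the format names, the field layout, and the idea that a common record pairs a vector component with a structured component.

```python
from dataclasses import dataclass, field

@dataclass
class CommonQNR:
    """A hypothetical common record format (fields are assumptions):
    a vector ('quasilinguistic') component plus a structured component."""
    embedding: list
    graph_edges: list = field(default_factory=list)

def normalize(record, fmt):
    """Translate QNRs arriving in made-up source formats into CommonQNR
    before they are used as training data."""
    if fmt == "flat":    # e.g. a bare embedding with no structure
        return CommonQNR(embedding=list(record))
    if fmt == "graph":   # e.g. an (embedding, edges) pair
        emb, edges = record
        return CommonQNR(embedding=list(emb), graph_edges=list(edges))
    raise ValueError(f"unknown QNR format: {fmt}")

# Toy usage: two records in different formats end up in one schema.
a = normalize([0.1, 0.2], "flat")
b = normalize(([0.3, 0.4], [("cites", 7)]), "graph")
```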
There would likely be multiple models of books, some generated with human guidance, and others generated to optimize a variety of predictions.
Maybe some models would be optimized for answering questions like "What does the author believe about X?", as evaluated by a service that's designed to evaluate those answers.
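A toy stand-in for that evaluation service, just to show the shape of the optimization target; the scoring rule (word overlap with reference answers) is my own placeholder, not anything the paper proposes:

```python
def evaluation_service(answer, reference_answers):
    """Hypothetical scorer for answers to 'what does the author believe
    about X?'. Scores word overlap with reference answers in [0, 1];
    a real service would presumably be far richer than this."""
    answer_words = set(answer.lower().split())
    best = 0.0
    for ref in reference_answers:
        ref_words = set(ref.lower().split())
        if ref_words:
            best = max(best, len(answer_words & ref_words) / len(ref_words))
    return best

score = evaluation_service(
    "the author believes the risk is serious",
    ["the author believes this risk is serious"],
)
```

A book-model being trained here would be tuned to maximize this score across many questions.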
Some models might be constructed mostly by a system that takes information about the reputations of the works a book cites, and infers a reliability estimate for each of the book's claims by aggregating the reliability of the citations that support that claim.
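A minimal sketch of that aggregation step, assuming citation reputations come in as scores in [0, 1]. The particular aggregation rule (best citation discounted toward the average of the support) and the fallback prior are my illustrative choices, not anything specified in the paper:

```python
from statistics import mean

def claim_reliability(citation_scores, prior=0.5):
    """Estimate a claim's reliability from the reputation scores of the
    works cited in its support. The aggregation rule is an assumption:
    halfway between the strongest citation and the average citation."""
    if not citation_scores:
        return prior  # no supporting citations: fall back to a prior
    return 0.5 * max(citation_scores) + 0.5 * mean(citation_scores)

# Toy usage: a claim supported by two strong sources and one weak one.
r = claim_reliability([0.9, 0.8, 0.3])
```

One design note: a pure max would let a single strong citation launder weak support, while a pure mean punishes claims for citing extra background material; mixing them is a crude compromise.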
Possibly you’re confused because you’re imagining a more restrictive set of rules than Drexler intends for composing QNRs. He’s using rules that are much more general than what’s typically used for creating syntax trees of natural language. See section 8.1.2 for some hints. But I don’t see a clear answer to this kind of confusion.
Reading 8.1.2, this post, some of the rest of the paper, and Drexler's blog post helped me understand QNRs. I think I see some of what he's getting at, but if there's a unified core to all this then I can't crisply define it, or even generate enough examples in my head that I think they could be used to interpolate the rest of the space of possible QNRs.