I wonder if you could do something similar with all peer-reviewed scientific publications, summarizing all findings into an encyclopedia of all scientific knowledge. Basically, each article in the wiki would be a review article on a particular topic. The AI would have to track newly published results, determine which existing topics in the encyclopedia they relate to or whether creating a new article is warranted, and update the relevant articles with the new findings.
Given how much science content humanity has accumulated, you’d probably need the AI to organize scientific topics into a tree, with parent articles summarizing topics at a higher level of abstraction and child articles digging into narrower scopes more deeply. Or more generally, a directed acyclic graph, so cross-disciplinary topics can sit under more than one parent.
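A minimal sketch of what that structure might look like, assuming hypothetical `Article` nodes (the class and field names here are illustrative, not any existing system's API). The key point is that a child can have multiple parents, which is what distinguishes the DAG from a plain tree:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each encyclopedia article is a node in a DAG.
# Multiple parents are allowed so a cross-disciplinary topic can sit
# under more than one broader topic.
@dataclass
class Article:
    title: str
    summary: str = ""
    parents: list["Article"] = field(default_factory=list)
    children: list["Article"] = field(default_factory=list)

    def add_child(self, child: "Article") -> None:
        self.children.append(child)
        child.parents.append(self)

science = Article("Science")
biology = Article("Biology")
chemistry = Article("Chemistry")
biochem = Article("Biochemistry")  # cross-disciplinary: two parents

science.add_child(biology)
science.add_child(chemistry)
biology.add_child(biochem)
chemistry.add_child(biochem)

# Biochemistry is reachable from both Biology and Chemistry.
print([p.title for p in biochem.parents])  # → ['Biology', 'Chemistry']
```

In a tree, `Biochemistry` would be forced to live under exactly one of its parent disciplines; the DAG avoids that arbitrary choice, at the cost of needing cycle checks when articles are added or re-linked.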
Maybe future versions of AI chatbots could use something like this as a shared persistent memory that all chatbot instances could reference as a common ground truth. The only trick would be getting the system to use sound epistemology and reliably report uncertainty instead of hallucinations.
“Make new articles from scratch” seems to me like the kind of noise-generation challenge where AI tends to perform more artistically than factually. “Translate this for a particular reader”, on the other hand, plays to its strengths. I notice that the original post seems to be gesturing at the former while you’re reifying it into the latter :)
With the right backend—and that might be a wiki format, or it might be something more structured under the hood—I suspect that current AI could do quite well at finding areas where pieces of research contradict one another.