This discourse structure associates related claims and evidence, [...]
To make it practically possible for non-experts to efficiently make sense of large, spread-out collections of data (e.g. to answer some question about the discourse on a given topic), it’s probably necessary not only to rapidly summarize all that data, but also to translate it into some easily human-comprehensible form.
I wonder if it’s practically possible to have LMs read a bunch of data (from papers to Twitter “discourse”) on a given topic, and rapidly, on demand, produce various kinds of concise, visual, possibly interactive summaries of that topic? E.g. something like this, or a probabilistic graphical model, or some kind of data visualization (depending on which aspect of which kind of topic is in question)?
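To make the idea a bit more concrete, here is a rough sketch (not a working product) of the kind of pipeline imagined above: an LM reads a pile of documents on a topic and emits a small claim/relation graph that can be rendered as a visual summary. `call_lm`, the prompt wording, and the JSON schema are all invented here purely for illustration; the graph handling uses standard networkx/matplotlib calls.

```python
import json
import networkx as nx
import matplotlib.pyplot as plt


def call_lm(prompt: str) -> str:
    # Placeholder: plug in whatever LM API is available.
    raise NotImplementedError


def summarize_topic(documents: list[str], topic: str) -> nx.DiGraph:
    """Ask the LM for a claim/relation structure and build a graph from it."""
    prompt = (
        f"Read the following documents about '{topic}'. "
        'Return JSON of the form {"claims": [{"id", "text"}], '
        '"edges": [{"from", "to", "relation"}]}, '
        "where relation is one of 'supports', 'contradicts', 'elaborates'.\n\n"
        + "\n---\n".join(documents)
    )
    structure = json.loads(call_lm(prompt))
    graph = nx.DiGraph()
    for claim in structure["claims"]:
        graph.add_node(claim["id"], text=claim["text"])
    for edge in structure["edges"]:
        graph.add_edge(edge["from"], edge["to"], relation=edge["relation"])
    return graph


def draw_summary(graph: nx.DiGraph) -> None:
    """Render the claim graph as a simple static visual summary."""
    pos = nx.spring_layout(graph, seed=0)
    nx.draw_networkx(graph, pos, node_color="lightblue", font_size=8)
    nx.draw_networkx_edge_labels(
        graph, pos, edge_labels=nx.get_edge_attributes(graph, "relation")
    )
    plt.axis("off")
    plt.show()
```

An interactive or probabilistic version would obviously need much more than this, but even a static claim graph like the one above is closer to “easily human-comprehensible” than a wall of source text.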
I strongly agree about the benefit of fluent, trustworthy (perhaps customised/contextualised) summarisation. I think LMs are getting there with this, and we should bank on (and advocate/work for) improvements to that kind of capability. It’s probably costly right now to produce these bespoke, but amortising that cost by focusing on important, wide-reach content could be quite powerful.
Part of the motive for the discussion here of structure mapping (inference and discourse) is that this epistemic structure metadata can be relatively straightforward to validate, just very time-consuming for humans to do. But once pieced together, it should offer a useful foundation for all sorts of downstream sense-making (like the summarisation you’re describing here).
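As a minimal sketch of what that “epistemic structure metadata” might look like in practice: LM-proposed links between claims and their evidence, each small enough for a human to check in isolation, which is what makes piecewise validation tractable even if it’s slow. The field names and relation types here are illustrative assumptions, not a fixed schema from the discussion above.

```python
from dataclasses import dataclass, field


@dataclass
class StructureLink:
    claim_id: str             # the claim being supported or attacked
    evidence_id: str          # source passage, dataset, citation, etc.
    relation: str             # e.g. "supports", "contradicts", "cites"
    proposed_by: str = "lm"   # who generated the link
    validated: bool = False   # flipped once a human has checked it


@dataclass
class TopicStructure:
    topic: str
    links: list[StructureLink] = field(default_factory=list)

    def validation_queue(self) -> list[StructureLink]:
        """Unchecked links, i.e. the (time-consuming) human review backlog."""
        return [link for link in self.links if not link.validated]
```

Once a structure like this has been validated, downstream summarisers can treat the links as ground truth rather than re-deriving them from raw text each time.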