How Should Different NLG Components Add Value?

I’ve seen a few discussions recently about where NLG “adds value” compared to non-NLG approaches to text generation, such as templates.  I wrote a related blog post in 2016; here I’d like to look at the question again from the perspective of the different components of the NLG or data-to-text pipeline: data interpretation, document planning, microplanning, and surface realisation.  From a very high-level perspective, what is the goal of each of these components, and what “value does it add” compared to simpler approaches?   I’m taking a more researchy perspective than in my previous blog post; i.e., focusing on things NLG will hopefully be able to do well in the 2020s, as opposed to things that NLG systems do well in 2018.

The D2T (data-to-text) pipeline also contains a data (signal) analysis module, but this is not distinctive or special to D2T/NLG, so I won’t discuss it further here.

Data Interpretation: Articulate Analytics

Much (most?) of the value of a data-to-text system is in the information that it communicates.   This information originates in the Data Interpretation module, which looks for useful insights and relationships to communicate to the reader.    Crucially, the analytics done in a data-to-text system must produce “human-friendly” output, which makes it different from conventional analytics.  My colleague Yaji Sripada calls this “articulate analytics”.

To take a simple example from a paper which Yaji and I published in KDD many years ago: if a D2T system wants to describe a trend in a data set, it should use linear interpolation (where the start and end points of the line are real data points) rather than linear regression (a best-fit line, whose endpoints may not be real data points).  Linear regression works well for most kinds of reasoning, but D2T users will complain if the system says “temperature rose from 20 to 28 over the past hour” when the initial temperature was in fact 22, even if the best-fit regression line starts at 20.
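To make this concrete, here is a rough Python sketch of the difference between the two descriptions; the numbers are my own invented illustration, not the code or data from the KDD paper.

```python
# Contrast two ways of summarising a trend: a best-fit regression line
# vs. a line anchored on the actual first and last data points.
import numpy as np

times = np.array([0, 15, 30, 45, 60])                # minutes (hypothetical)
temps = np.array([22.0, 21.0, 24.0, 26.0, 28.0])     # degrees C (hypothetical)

# Linear regression: the fitted line's endpoints need not be real data points.
slope, intercept = np.polyfit(times, temps, 1)
reg_start = intercept
reg_end = slope * times[-1] + intercept

# Linear interpolation: anchor the description on actual observed values.
interp_start, interp_end = temps[0], temps[-1]

print(f"Regression:    temperature rose from {reg_start:.1f} to {reg_end:.1f}")
print(f"Interpolation: temperature rose from {interp_start:.1f} to {interp_end:.1f}")
# Only the second sentence matches what the reader can verify in the raw data.
```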

There is of course a lot of interest in 2018 in explainable AI.  I think this touches on the same issue, i.e., the need to explain data and reasoning to people in a way that is accessible and easy to understand.  It’s great to see so much energy devoted to analytics which can be communicated, and I expect this will lead to much better articulate analytics.

Document Planning: Creating a Narrative

Almost everyone I talk to who uses an NLG or D2T system says that he or she wants the system to produce a story or narrative.  We are much better at understanding stories than abstract data; perhaps this is because (from an evolutionary perspective) we have had to understand stories for tens of thousands of years, but only within the past few hundred years have ordinary people needed to understand abstract data.  This manifests in many ways.  For example, we are really bad at understanding probabilities (which rarely occur in stories), and we aggressively look for causal links when we interpret what we are told (in a good story, most events have causal links to other events).

In a D2T/NLG system, producing a good narrative is primarily the responsibility of the document planner.  Currently, document planning in NLG systems is usually based on imitating corpus texts, via either learning or explicit scripts.   A few systems have used logic-based approaches for document planning, without (in my mind) a great deal of success.   We need psychologically-based approaches to narrative creation, and I’m happy to say that I’m seeing more research on this topic now than in previous years.   So hopefully we’ll have decent psychologically-based algorithms for narrative creation in the 2020s.
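To illustrate what an “explicit script” of the corpus-imitating kind might look like, here is a minimal, purely hypothetical Python sketch of a schema-based document planner; real systems are of course considerably more sophisticated.

```python
# Messages from data interpretation are slotted into a fixed narrative schema
# (overview, then key events, then outlook), imitating the structure of
# corpus texts rather than following any psychological model of narrative.
from dataclasses import dataclass

@dataclass
class Message:
    kind: str          # e.g. "overview", "event", "outlook"
    content: str
    importance: float

SCHEMA = ["overview", "event", "outlook"]   # fixed, hand-written ordering

def plan_document(messages, max_events=3):
    """Return an ordered document plan (a flat list of messages here)."""
    plan = []
    for section in SCHEMA:
        selected = [m for m in messages if m.kind == section]
        selected.sort(key=lambda m: m.importance, reverse=True)
        if section == "event":
            selected = selected[:max_events]   # keep only the key events
        plan.extend(selected)
    return plan

messages = [
    Message("event", "goal scored in minute 12", 0.9),
    Message("overview", "home team won 2-0", 1.0),
    Message("outlook", "next match is on Saturday", 0.3),
]
for m in plan_document(messages):
    print(m.content)   # overview first, then events by importance, then outlook
```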

Microplanning: Grounding Language in the World

The microplanner needs to choose linguistic resources (words and syntactic constructs) to express information.  One key challenge and “value add” here is doing a good job of mapping real-world data onto words; this is sometimes called the “language grounding” problem.  For example: deciding which colour term should be used to express a particular RGB value; which time phrase should be used to express a particular clock time; and when a trend or other pattern should be described as “significant”.

Language grounding is hard because it is very contextual.   There is not a simple mapping from RGB values to colour terms; the appropriate colour term depends on lighting conditions, visual context (e.g., RGB values of neighbouring objects), and linguistic context.  It is also idiosyncratic; my daughter uses the word “purple” to describe objects which I call “pink”.
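Here is a deliberately crude, hypothetical Python sketch of what contextual grounding for colour terms might look like; the thresholds and the “lighting” adjustment are invented for illustration, not taken from any real system.

```python
def colour_term(rgb, lighting="neutral"):
    """Map an (r, g, b) triple to a colour word, crudely adjusting for lighting."""
    r, g, b = rgb
    if lighting == "dim":
        # Under dim light the same surface reflects darker RGB values,
        # so scale the values up before naming the colour.
        r, g, b = (min(255, int(c * 1.4)) for c in (r, g, b))
    if r > 180 and b > 150:
        # The pink/purple boundary is famously speaker-dependent.
        return "pink" if r - b > 40 else "purple"
    if r > 180 and g < 100 and b < 100:
        return "red"
    return "some other colour"

print(colour_term((230, 120, 160)))          # -> "pink" (for this speaker!)
print(colour_term((160, 85, 115), "dim"))    # -> "pink": same surface, dim light
```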

After years of neglect, there is now an upsurge of interest in language grounding, including using data-based methods to build word-choice classifiers and also using psychologically inspired models of perception and grounding.

Realisation: Automating Grammatical Correctness

The last pipeline stage is surface realisation.  In my mind, the value-add of realisation is that it automates grammatical processing so that system developers don’t have to worry about it.   This can range from fairly simple things, such as automating the choice of “a” or “an” or automatically pluralising a word, to more complex tasks such as ordering adjectives.  Automating this kind of thing both reduces development effort and increases text quality.
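To give a flavour of the kind of bookkeeping a realiser takes off the developer’s hands, here is a tiny, hypothetical Python sketch; a real realiser such as simplenlg handles far more (morphology from a lexicon, agreement, adjective ordering, and so on).

```python
def indefinite_article(noun):
    """Choose 'a' or 'an' (an approximation: real realisers go by sound, not spelling)."""
    return "an" if noun[0].lower() in "aeiou" else "a"

def pluralise(noun, count):
    """Very naive pluralisation; a real realiser consults a lexicon for irregular forms."""
    if count == 1:
        return noun
    if noun.endswith(("s", "sh", "ch", "x")):
        return noun + "es"
    return noun + "s"

print(f"{indefinite_article('error')} error")   # -> "an error"
print(f"3 {pluralise('match', 3)}")             # -> "3 matches"
```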

Surface realisation works pretty well in 2018 (although of course there is room for improvement!), as is shown by the existence of popular open-source realisers such as simplenlg.

Conclusion

My overall message is that NLG and D2T systems should produce texts that are easy for people to read and understand; this is the ultimate “value add” and differentiator from template systems.  This means that insights are presented and explained in a manner which makes sense to people; the text as a whole is structured as a story; appropriate words are used to communicate real-world data; and the rules of grammar are obeyed.    We’ve got a ways to go before we can fully achieve this vision, but I think the field is making good progress.

