The Most Exciting Natural Language Processing Research of 2019 So Far

The data revolution isn’t just about numbers: researchers are teaching machines to process natural language as data. The growing capacity of machines to interpret human language, whether written or spoken, opens new possibilities for interaction between computers and people. Below, we highlight some of the most exciting natural language processing (NLP) research published so far this year.

A New Approach to Text Normalization

According to researchers Adrián Javaloy Bornás of the Max Planck Institute for Intelligent Systems and Ginés García Mateos of the University of Murcia, text normalization is an overlooked problem hindering the development of natural language processing. Typically, scientists in the field have relied on “deep learning models that try to learn how to solve the problems from the data itself,” but the cost and the amount of data needed to solve language-processing tasks accurately have been a barrier to further progress.

In their paper “A Character-Level Approach to the Text Normalization Problem Based on a New Causal Encoder,” Javaloy and García Mateos devise a new approach to normalizing nonstandard text. Their model relies exclusively on neural networks, built around a new causal encoder architecture they call the causal feature extractor (CFE). Their experiments found that the model approaches state-of-the-art deep learning systems in accuracy while leaving significant room for improvement.
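The key property of a causal, character-level encoder is that the representation at each position depends only on the current character and the ones before it. The sketch below illustrates that idea with a left-padded ("causal") 1D convolution over character embeddings; all names, sizes, and the byte-level vocabulary are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def causal_char_conv(text, emb_dim=8, kernel_size=3, seed=0):
    """Toy causal character-level encoder: each output position sees
    only the current character and the ones preceding it (no look-ahead).
    Illustrative only -- not the CFE architecture from the paper."""
    rng = np.random.default_rng(seed)
    # Hypothetical vocabulary: raw UTF-8 byte values as character ids.
    ids = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)
    emb = rng.normal(size=(256, emb_dim))            # character embeddings
    x = emb[ids]                                     # (seq_len, emb_dim)
    # Left-pad with zeros so the convolution cannot peek at future chars.
    x = np.vstack([np.zeros((kernel_size - 1, emb_dim)), x])
    w = rng.normal(size=(kernel_size, emb_dim, emb_dim))
    out = np.stack([
        sum(x[t + k] @ w[k] for k in range(kernel_size))
        for t in range(len(ids))
    ])
    return out                                       # (seq_len, emb_dim)

feats = causal_char_conv("Dr. Smith lives on 5th Ave.")
print(feats.shape)
```

Because of the left padding, editing a later character never changes the features of earlier positions, which is what makes such an encoder suitable for left-to-right normalization decisions.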

[Related Article: The Art of Conversational and Domain-Specific AI]


Recommendations Through Conversation

Creating better chatbots begins with larger and more diverse dialogue datasets. In their paper “Towards Deep Conversational Recommendations,” a team of researchers outlines their creation of a chatbot designed to provide specific movie recommendations. Their experiment began with the creation of a dialogue dataset. “Until now,” they write, “there has been no publicly available large-scale dataset consisting of real-world dialogues centered around recommendations. To address this issue and to facilitate our exploration here, we have collected REDIAL, a dataset consisting of over 10,000 conversations centered around the theme of providing movie recommendations.”

Using this data, they built a recommendation model that uses RNNs and multi-level encoding, ultimately producing a highly accurate “autoencoder-based recommendation engine.” Starting from a cold start, their experiment tested the model’s capacity to provide accurate and relevant recommendations, and the results indicated that it outperformed baseline estimates.
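The core of an autoencoder-based recommender is to train a network to reconstruct each user's known preferences, then read recommendations off the reconstruction for items the user hasn't mentioned yet. The following is a minimal sketch of that idea with a tiny dense autoencoder trained by gradient descent; the toy “liked” matrix and all dimensions are invented stand-ins, far simpler than the REDIAL-trained model in the paper.

```python
import numpy as np

def train_autoencoder(R, hidden=3, lr=0.05, steps=2000, seed=0):
    """Train a one-hidden-layer autoencoder to reconstruct each user's
    preference vector. Toy illustration, not the paper's architecture."""
    rng = np.random.default_rng(seed)
    n = R.shape[1]
    W1 = rng.normal(0, 0.1, (n, hidden))
    W2 = rng.normal(0, 0.1, (hidden, n))
    losses = []
    for _ in range(steps):
        H = np.tanh(R @ W1)                # encode preferences
        out = H @ W2                       # decode (reconstruct) them
        err = out - R
        losses.append(float((err ** 2).mean()))
        dH = (err @ W2.T) * (1 - H ** 2)   # backprop through tanh
        W2 -= lr * H.T @ err / len(R)
        W1 -= lr * R.T @ dH / len(R)
    return W1, W2, losses

# Hypothetical user-movie "liked" matrix (rows = users, cols = movies).
R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0]], dtype=float)
W1, W2, losses = train_autoencoder(R)

# A new user mentions liking only movie 0; score the remaining movies.
u = np.array([[1.0, 0, 0, 0, 0]])
scores = (np.tanh(u @ W1) @ W2)[0]
print(scores.round(2))
```

Movies with high reconstructed scores that the user hasn't rated become the candidate recommendations; in the paper this component is fed by preferences extracted from the dialogue itself.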


Recommendations for Scientific Literature

Recommendation-oriented services are useful beyond the realm of basic consumer goods like movies. One more specialized application of this technology might be in addressing the proliferation of published scientific literature, which has become overwhelming. As Robin Brochier of the Université de Lyon puts it in his paper “Representation Learning for Recommender Systems with Application to the Scientific Literature,” “For the researcher facing this deluge of information, it has become difficult, if not impossible, to conduct regular and exhaustive monitoring of his areas of expertise.”

In a step toward remedying this problem, he has created a recommendation system that operates within the realm of scientific literature archives and provides recommendations based on such factors as relevant citations, co-authors, and other measures of credibility. Brochier’s system goes beyond the features offered by simple search engines by using an unsupervised machine-learning model to comb through and evaluate the vast quantities of unstructured data represented by the archives of scientific literature.
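One simple way to picture a recommender that mixes textual relevance with citation and co-authorship signals is a blended score: text similarity plus a graph-overlap bonus. The sketch below uses a hand-written cosine similarity over bags of words and a made-up overlap value; this scoring rule, the `alpha` weight, and the toy papers are all hypothetical, whereas Brochier's system learns representations from the data rather than hand-weighting features.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def score(query, paper, link_overlap, alpha=0.7):
    """Hypothetical blend of text similarity with a normalized
    citation/co-author overlap bonus."""
    text = cosine(Counter(query.lower().split()),
                  Counter(paper["abstract"].lower().split()))
    return alpha * text + (1 - alpha) * link_overlap

papers = [
    {"title": "Graph embeddings", "abstract": "learning node embeddings on citation graphs"},
    {"title": "Image segmentation", "abstract": "convolutional models for medical images"},
]
# Toy normalized citation/co-author overlap with the researcher's library.
overlaps = [0.5, 0.0]
ranked = sorted(zip(papers, overlaps),
                key=lambda pair: score("citation graph embeddings", *pair),
                reverse=True)
print(ranked[0][0]["title"])   # → Graph embeddings
```

The point of the blend is that a paper by a frequent co-author can outrank a textually similar but unconnected one, which is exactly the kind of signal a plain keyword search engine ignores.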

[Related Article: The Promise of Retrofitting: Building Better Models for Natural Language Processing]


Analyzing How Computers Process Language

A team of researchers from Seoul National University has investigated how deep convolutional networks interpret natural language. To peer into this “black box,” they created “a simple but highly effective concept alignment method that can discover which natural language concepts are aligned to each unit in the representation,” bringing to light how these networks decode the data they are given. The team drew on recent computer vision research that examines the intermediate layers of deep networks for concept associations, extending the idea beyond word association to the “meaningful building blocks of natural language” and thereby providing “insights into how various linguistic features are encoded by the hidden units of deep representations.” Their findings are outlined in the paper “Discovery of Natural Language Concepts in Individual Units of CNNs.”
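A crude intuition for concept alignment: for each hidden unit, look at the inputs that activate it most strongly and ask what they have in common. The sketch below does exactly that with made-up sentences and invented activation values, labeling each unit with the most frequent word among its top-activating sentences; the real method in the paper is considerably more careful (it aligns units to concepts beyond single words).

```python
import numpy as np
from collections import Counter

def align_concepts(sentences, activations, top_k=3):
    """Label each hidden unit with the most frequent word among its
    top-activating sentences. A toy version of the concept alignment
    idea, not the paper's actual method."""
    concepts = {}
    for unit in range(activations.shape[1]):
        top = np.argsort(activations[:, unit])[-top_k:]  # top-k sentences
        words = Counter(w for i in top for w in sentences[i].lower().split())
        concepts[unit] = words.most_common(1)[0][0]
    return concepts

sentences = [
    "the movie was great",
    "a great film overall",
    "great acting and a great plot",
    "the weather is cold today",
    "cold wind and cold rain",
]
# Hypothetical unit activations (rows = sentences, columns = units).
acts = np.array([[0.9, 0.1],
                 [0.8, 0.0],
                 [0.7, 0.2],
                 [0.0, 0.9],
                 [0.1, 0.8]])
print(align_concepts(sentences, acts))   # → {0: 'great', 1: 'cold'}
```

Here unit 0 fires on sentiment-laden sentences and unit 1 on weather sentences, so the alignment recovers “great” and “cold” as their respective concepts.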



To deal with the vast quantity of written content on the internet and elsewhere, companies will need advanced models to interpret, sort, and analyze natural language. The research described above indicates the growth potential of NLP technology and the multiple directions its continued advancement may take. Where do you think NLP research will head next?


Luke Coughlin
