Top 10 AI Chatbot Research Papers from arXiv.org in 2019
Data Science | Academic Research | Modeling | Chatbots. Posted by Daniel Gutierrez, ODSC, on January 13, 2020
AI chatbots are a hot commodity right now, and they constitute a fertile area of research for machine learning. Researchers from all over the globe are working hard to push the envelope of what we can expect from chatbots. In this article, I’ve scoured the arXiv.org pre-print server for the 10 most compelling AI chatbot research papers of 2019. It’s an impressive list! If even half of the proposed technologies see their way into products or developer tools, we’re in for a fun ride in 2020 and beyond.
[Related Article: Best NLP Research of 2019]
Feedback-Based Self-Learning in Large-Scale Conversational AI Agents
Today, most large-scale conversational AI agents (e.g. Alexa, Siri, or Google Assistant) are built using manually annotated data to train the different components of the system. Typically, the accuracy of the ML models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation becomes prohibitively costly and time-consuming. In this paper, a group of Amazon researchers propose a system that leverages user-system interaction feedback signals to automate learning without any manual annotation. Users tend to modify a previous query in hopes of fixing an error in the previous turn and getting the right results. These reformulations are often preceded by defective experiences caused by errors in automatic speech recognition (ASR), natural language understanding (NLU), entity resolution (ER), or the application itself. In some cases, users may not properly formulate their requests (e.g. providing only a partial title of a song), but looking across a wider pool of users and sessions reveals the underlying recurrent patterns. The proposed self-learning system automatically detects errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components. The results show that the approach is highly scalable and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers.
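The paper’s pipeline itself is internal to Alexa, but the core signal it mines (a user repeating a query with small edits after a failed turn) can be sketched with a simple string-similarity heuristic. The function names and the 0.6 threshold below are illustrative assumptions, not the paper’s method:

```python
from difflib import SequenceMatcher

def is_reformulation(prev_utterance, next_utterance, threshold=0.6):
    """Heuristic: consecutive utterances that are highly similar but not
    identical suggest the user is rephrasing a failed request."""
    if prev_utterance == next_utterance:
        return False  # an exact repeat, not a rephrasing
    ratio = SequenceMatcher(None, prev_utterance.lower(),
                            next_utterance.lower()).ratio()
    return ratio >= threshold

def mine_reformulation_pairs(session):
    """Scan a session (list of utterances) for (defective, corrected)
    pairs that a self-learning system could use as training signal."""
    return [(a, b) for a, b in zip(session, session[1:])
            if is_reformulation(a, b)]

session = [
    "play despacito by luis fonsi",
    "play despacito by luis fonsi and daddy yankee",
    "what's the weather today",
]
pairs = mine_reformulation_pairs(session)  # one rephrasing pair found
```

A production system would replace the similarity ratio with learned models and condition on explicit failure signals (barge-ins, abandoned sessions), but the mined pairs play the same role: supervision without manual annotation.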
Enriching Conversation Context in Retrieval-based Chatbots
Work on retrieval-based chatbots, like most sequence pair matching tasks, can be divided into Cross-encoders, which perform word matching over the pair, and Bi-encoders, which encode each element of the pair separately. The former has better performance; however, since candidate responses cannot be encoded offline, it is also much slower. Lately, multi-layer transformer architectures pre-trained as language models have been used to great effect on a variety of natural language processing and information retrieval tasks. Recent work has shown that these language models can be used in text-matching scenarios to create Bi-encoders that perform almost as well as Cross-encoders while having a much faster inference speed. This paper expands upon that work by developing a sequence matching architecture that takes contexts in the training dataset into account at inference time.
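The contrast between the two encoder families can be sketched with toy bag-of-words encoders standing in for the transformers. Everything below is an illustrative assumption rather than the paper’s architecture; the point is only the structural difference: the bi-encoder’s candidate vectors are computed once, offline, while the cross-encoder must score every (query, candidate) pair jointly at request time.

```python
from collections import Counter
import math

def encode(text):
    """Toy stand-in for a neural encoder: a bag-of-words vector."""
    return Counter(text.lower().split())

def dot(u, v):
    return sum(u[w] * v[w] for w in u)

def cosine(u, v):
    norm = math.sqrt(dot(u, u)) * math.sqrt(dot(v, v))
    return dot(u, v) / norm if norm else 0.0

# --- Bi-encoder: candidates are encoded ONCE, offline. ---
candidates = ["i love pizza too", "the weather is nice", "see you tomorrow"]
candidate_index = [(c, encode(c)) for c in candidates]  # precomputed

def bi_encoder_rank(query):
    q = encode(query)  # only the query is encoded at request time
    return max(candidate_index, key=lambda pair: cosine(q, pair[1]))[0]

# --- Cross-encoder: every (query, candidate) pair is scored jointly. ---
def cross_encoder_rank(query):
    def joint_score(candidate):
        # A cross-encoder sees both texts at once; here, word overlap.
        return len(set(query.lower().split()) & set(candidate.lower().split()))
    return max(candidates, key=joint_score)

best = bi_encoder_rank("do you love pizza")
```

With a million candidate responses, the bi-encoder still encodes only the query at inference, which is why it is so much faster despite typically trailing the cross-encoder in accuracy.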
Chat-Bot-Kit: A web-based tool to simulate text-based interactions between humans and with computers
This AI chatbot research paper describes Chat-Bot-Kit, a web-based tool for text-based chats designed for research purposes in computer-mediated communication (CMC). Chat-Bot-Kit enables researchers to carry out language studies on text-based real-time chats: the generated messages are enriched with language performance data such as pauses, keyboard speed, and mouse movement. The tool provides two modes of chat communication, quasi-synchronous and synchronous, and various typing indicators. It is also designed for wizard-of-oz studies in human-computer interaction (HCI) and for the evaluation of chatbots (dialogue systems) in natural language processing (NLP).
Unsupervised Context Rewriting for Open Domain Conversation
Context modeling plays a pivotal role in open domain conversation. Existing works either use heuristic methods or jointly learn context modeling and response generation with an encoder-decoder framework. This paper proposes an explicit context rewriting method, which rewrites the last utterance by taking the conversation history into account. The method leverages pseudo-parallel data to train a context rewriting network built upon CopyNet and refined with reinforcement learning. The rewritten utterance benefits candidate retrieval and explainable context modeling, and makes it possible to apply a single-turn framework to multi-turn scenarios. Empirical results show that the model outperforms baselines in terms of rewriting quality, multi-turn response generation, and end-to-end retrieval-based chatbots.
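The paper’s rewriter is a learned CopyNet model, but the effect it aims for (making the last utterance self-contained) can be illustrated with a naive, hand-written pronoun substitution. The pronoun list and capitalization-based entity heuristic below are assumptions for illustration only, not the paper’s method:

```python
def rewrite_last_utterance(context, last_utterance, pronouns=("it", "that", "one")):
    """Toy context rewriting: replace a pronoun in the last utterance with
    the most recently mentioned capitalized entity from the context.
    The real system learns such rewrites from data instead of using rules."""
    entities = [w.strip(".,!?") for turn in context for w in turn.split()
                if w[0].isupper() and w != "I"]
    if not entities:
        return last_utterance
    antecedent = entities[-1]  # most recently mentioned entity
    return " ".join(antecedent if w.strip(".,!?").lower() in pronouns else w
                    for w in last_utterance.split())

# "it" in the last turn is ambiguous without context; the rewrite makes
# the utterance usable by a single-turn retrieval model on its own.
context = ["I watched Titanic last night"]
rewritten = rewrite_last_utterance(context, "do you like it ?")
```

The rewritten utterance can then be fed to any single-turn retrieval or generation model, which is exactly the benefit the paper claims for explicit rewriting over implicit context encoding.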
Contract Statements Knowledge Service for Chatbots
To move toward conversational agents capable of handling more complex questions on contractual conditions, formalizing contract statements in a machine-readable way is crucial. However, constructing a formal model that captures the full scope of a contract proves difficult due to the overall complexity of the rules it represents. Instead, this paper presents a top-down approach to the problem. After identifying the most relevant contract statements, their underlying rules are modeled with a novel knowledge engineering method. A user-friendly tool developed for this purpose allows this to be done easily and at scale. The statements are then exposed as a service so they can be smoothly integrated into any chatbot framework.
Designing dialogue systems: A mean, grumpy, sarcastic chatbot in the browser
This AI chatbot research paper explores a deep learning-based dialogue system that generates sarcastic and humorous responses, approached from a conversation design perspective. The researchers trained a seq2seq model on a carefully curated dataset of 3,000 question-answer pairs, the core of their mean, grumpy, sarcastic chatbot. The work then shows that end-to-end systems learn patterns very quickly from small datasets and are thus able to transfer simple linguistic structures representing abstract concepts to unseen settings. An LSTM-based encoder-decoder model was also deployed in the browser, where users can interact with the chatbot directly. Human raters evaluated linguistic quality, creativity, and human-like traits, revealing the system’s strengths, limitations, and potential for future research.
Say What I Want: Towards the Dark Side of Neural Dialogue Models
Neural dialogue models have been widely adopted in various chatbot applications because of their good performance in simulating and generalizing human conversations. However, there exists a dark side to these models: due to the vulnerability of neural networks, a neural dialogue model can be manipulated by users to say what they want, which raises concerns about the security of practical chatbot services. This paper investigates whether we can craft inputs that lead a well-trained black-box neural dialogue model to generate targeted outputs. The problem is formulated as reinforcement learning (RL), and the researchers train a Reverse Dialogue Generator that efficiently finds such inputs for targeted outputs. Experiments conducted on a representative neural dialogue model show that the proposed model is able to discover the desired inputs in a considerable portion of cases. Overall, the work reveals this weakness of neural dialogue models and may prompt further research into developing corresponding defenses.
InstructableCrowd: Creating IF-THEN Rules for Smartphones via Conversations with the Crowd
Natural language interfaces have become a common part of modern digital life. Chatbots utilize text-based conversations to communicate with users; personal assistants on smartphones such as Google Assistant take direct speech commands from their users; and speech-controlled devices such as Amazon Echo use voice as their only input mode. This paper introduces InstructableCrowd, a crowd-powered system that allows users to program their devices via conversation. The user verbally expresses a problem to the system, and a group of crowd workers collectively respond by programming relevant multi-part IF-THEN rules to help the user. The IF-THEN rules generated by InstructableCrowd connect relevant sensor combinations (e.g., location, weather, device acceleration, etc.) to useful effectors (e.g., text messages, device alarms, etc.). The study showed that non-programmers can use the conversational interface of InstructableCrowd to create IF-THEN rules of similar quality to manually created rules. InstructableCrowd broadly illustrates how users may converse with their devices, not only to trigger simple voice commands, but also to personalize their increasingly powerful and complicated devices.
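The multi-part IF-THEN rules the crowd workers author pair sensor conditions with effector actions. A minimal sketch of such a rule representation follows; the field names, sensor values, and action strings are assumptions for illustration, not InstructableCrowd’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class IfThenRule:
    """A multi-part IF-THEN rule: every sensor condition must hold
    before the effector actions fire."""
    conditions: dict  # sensor name -> required value
    actions: list     # effector descriptions

    def matches(self, sensor_readings):
        return all(sensor_readings.get(name) == value
                   for name, value in self.conditions.items())

def run_rules(rules, sensor_readings):
    """Return the actions of every rule triggered by the current readings."""
    fired = []
    for rule in rules:
        if rule.matches(sensor_readings):
            fired.extend(rule.actions)
    return fired

# A rule a crowd worker might author: rainy weather while at the office
# triggers a reminder text message.
rule = IfThenRule(
    conditions={"location": "office", "weather": "rain"},
    actions=["send_text: Take an umbrella home"],
)
fired = run_rules([rule], {"location": "office", "weather": "rain"})
```

The crowd’s job in the paper is to translate the user’s spoken problem into structures like this, which the device can then evaluate continuously against its sensors.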
Recently, chatbots have received increased attention from industry and diverse research communities as a dialogue-based interface providing advanced human-computer interactions. Meanwhile, Open Data continues to be an important trend and a potential enabler for government transparency and citizen participation. This paper shows how these two paradigms can be combined to help non-expert users find and discover open government datasets through dialogue.
[Related Article: The Most Influential Deep Learning Research of 2019]
#MeTooMaastricht: Building a chatbot to assist survivors of sexual harassment
Inspired by the recent #MeToo social movement, this AI chatbot research paper describes the construction of a chatbot to assist survivors of sexual harassment (designed for the city of Maastricht, but easily extended). The motivation behind this work is twofold: properly assist survivors of such events by directing them to institutions that can offer help, and increase incident documentation so as to gather more data about harassment cases that are currently under-reported. The work breaks the problem down into three data science/machine learning components: harassment type identification (treated as a classification problem), spatio-temporal information extraction (treated as a named entity recognition problem), and dialogue with the users (treated as a slot-filling based chatbot). The researchers achieved a success rate of more than 98% for identifying whether a case constitutes harassment, and around 80% for identifying the specific type of harassment. Locations and dates are identified with more than 90% accuracy, while time occurrences prove more challenging at almost 80%. Finally, initial validation of the chatbot shows great potential for the further development and deployment of a tool that could benefit society as a whole.
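The third component, the slot-filling dialogue, can be sketched as a loop that asks for whichever pieces of information are still missing. The slot names and prompts below are illustrative assumptions, not the #MeTooMaastricht bot’s actual dialogue design:

```python
# Illustrative required slots; the actual bot's slot set and wording
# are not specified in the summary above.
REQUIRED_SLOTS = {
    "harassment_type": "What kind of incident was it?",
    "location": "Where did it happen?",
    "time": "When did it happen?",
}

def next_prompt(slots):
    """Return the question for the first unfilled slot, or None once
    every required slot is filled and the report is complete."""
    for name, question in REQUIRED_SLOTS.items():
        if not slots.get(name):
            return question
    return None

def fill_slot(slots, name, value):
    """Fill one slot; in the paper, location/time values come from the
    NER component and the type from the harassment classifier."""
    updated = dict(slots)
    updated[name] = value
    return updated

slots = {}
first = next_prompt(slots)
slots = fill_slot(slots, "harassment_type", "verbal")
slots = fill_slot(slots, "location", "Maastricht station")
slots = fill_slot(slots, "time", "yesterday evening")
done = next_prompt(slots)  # None: the report is complete
```

This is the glue between the other two components: the classifier and NER models extract slot values from free text, and the dialogue manager keeps asking until the incident report is complete.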
Want to learn more about chatbots and other new data science research initiatives? Head to ODSC East in Boston this April 13-17 and learn from data scientists directly!