Through a novel neural network approach, researchers from Korea have enhanced the accuracy of machine comprehension question answering.
Research | posted by Kaylen Sanders, ODSC | July 26, 2018
The research team hails from Korea University as well as NCSOFT, a South Korean video game developer. Their newly developed question-aware sentence-gating networks allow sentence-level information to be incorporated directly into word-level encodings.
The ability for machines to derive meaning from text is no longer some far-off fantasy, but being able to answer questions based on a passage requires a level of semantic reasoning that extends beyond mere lookup and basic compositionality. Typically, a question-answering task is a matching game, whereby the machine aligns the word-level encoding vectors in the question with a corresponding set of word-level vectors in the passage. Yet, to effectively find the correct answer, a machine should synthesize sentence-level and even paragraph-level context, making sense of the higher-level semantics these grammatical structures entail. While prior methods have concentrated on making improvements to word-level understanding, this new research implements networks that “directly impose the sentence-level information into individual word encoding vectors.”
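To make the word-level "matching game" concrete, here is a minimal sketch (not the authors' code) of how passage word vectors are typically aligned against question word vectors with similarity-based attention; all shapes, names, and the dot-product similarity are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def word_level_match(passage_vecs, question_vecs):
    """passage_vecs: (P, dim); question_vecs: (Q, dim). Purely illustrative."""
    # Dot-product similarity between every passage word and every question word.
    similarity = passage_vecs @ question_vecs.T                  # (P, Q)
    # For each passage word, a distribution over the question words it aligns with.
    attention = F.softmax(similarity, dim=-1)                    # (P, Q)
    # Question-aware summary attached to each passage word.
    aligned_question = attention @ question_vecs                 # (P, dim)
    return torch.cat([passage_vecs, aligned_question], dim=-1)   # (P, 2*dim)

passage = torch.randn(120, 128)   # e.g. 120 passage words, 128-dim encodings
question = torch.randn(15, 128)   # e.g. 15 question words
matched = word_level_match(passage, question)
```

This kind of alignment operates word by word, which is exactly the limitation the sentence-gating approach is meant to address.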
To form the question-aware sentence-gating networks, the researchers created encoding vectors of individual words in which the original word vector was combined with its complementary sentence vector. Model performance was measured with the Exact Match (EM) score, the fraction of questions for which the model's prediction exactly matches the ground-truth answer. The sentence-gating model exceeded the EM of the established baseline by 1.53%, suggesting that a blend of word and sentence context results in more accurate answer predictions.
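The sketch below illustrates the gating idea described above, not the paper's exact architecture: each word vector is blended with the vector of the sentence that contains it, with a learned gate that also sees a pooled summary of the question. The dimensions, layer names, and the specific gating form are assumptions.

```python
import torch
import torch.nn as nn

class SentenceGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Gate is computed from the word, its sentence, and a question summary.
        self.gate = nn.Linear(3 * dim, dim)

    def forward(self, word_vec, sentence_vec, question_vec):
        # g in (0, 1) decides, per dimension, how much sentence context flows in.
        g = torch.sigmoid(
            self.gate(torch.cat([word_vec, sentence_vec, question_vec], dim=-1))
        )
        # Gated blend of the original word encoding and its sentence encoding.
        return g * word_vec + (1.0 - g) * sentence_vec

dim = 128
gate = SentenceGate(dim)
word = torch.randn(dim)        # encoding of one passage word
sentence = torch.randn(dim)    # encoding of the sentence containing that word
question = torch.randn(dim)    # pooled summary of the question
enriched = gate(word, sentence, question)
```

In a sketch like this, the gate lets the model decide, per word and per dimension, how much sentence-level context to impose on the word's own encoding, which is the behavior the reported EM gain is attributed to.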
As evidenced by the experiment, the sentence-gating technique provides words with greater contextual information, which enriches their semantic standing in the text. Looking into the future, the researchers see potential for the success of sentence gating to be replicated among other machine comprehension tasks. Automatic summarization, machine translation, and named entity recognition are among the various applications that could benefit from more powerful comprehension tools.
Find out more here.

Kaylen Sanders, ODSC
I currently study Computational Linguistics as an M.S. candidate at Brandeis University. I received my Bachelor's degree from the University of Pittsburgh where I explored linguistics, computer science, and nonfiction writing. I'm interested in the crossroads where language and technology meet.