Authors
Mrinmaya Sachan, Kumar Dubey, Eric Xing, Matthew Richardson
Publication date
2015/7
Conference
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Pages
239-249
Description
Understanding open-domain text is one of the primary challenges in NLP. Machine comprehension evaluates a system's ability to understand text through a series of question-answering tasks on short pieces of text, such that the correct answer can be found only in the given text. For this task, we posit that there is a hidden (latent) structure that explains the relation between the question, the correct answer, and the text. We call this the answer-entailing structure; given the structure, the correctness of the answer is evident. Since the structure is latent, it must be inferred. We present a unified max-margin framework that learns to find these hidden structures (given a corpus of question-answer pairs), and uses what it learns to answer machine comprehension questions on novel texts. We extend this framework to incorporate multi-task learning on the different subtasks that are required to perform machine comprehension. Evaluation on a publicly available dataset shows that our framework outperforms various IR and neural-network baselines, achieving an overall accuracy of 67.8% (vs. 59.9%, the best previously published result).
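For reference, the max-margin learning with latent structures described in the abstract is commonly formulated as a latent structural SVM objective. The sketch below is the standard generic formulation (Yu and Joachims style), not the paper's exact objective or its multi-task extension; the feature map \Phi, loss \Delta, and regularization constant C are generic placeholders.

\[
\min_{w}\;\; \frac{1}{2}\lVert w \rVert^2 \;+\; C \sum_{i=1}^{n} \Big[ \underbrace{\max_{\hat{y},\,\hat{h}} \big( w^\top \Phi(x_i, \hat{y}, \hat{h}) + \Delta(y_i, \hat{y}) \big)}_{\text{loss-augmented inference over answers and latent structures}} \;-\; \underbrace{\max_{h} \, w^\top \Phi(x_i, y_i, h)}_{\text{best latent structure for the correct answer}} \Big]
\]

Here x_i is a (text, question) instance, y_i the correct answer, and h the latent answer-entailing structure; the objective is typically optimized by alternating the inner maximizations with weight updates (e.g., CCCP or latent-variable structured perceptron/SGD variants).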
Total citations
Per-year citation counts, 2015–2024 (chart data not cleanly recoverable)
Scholar articles
M Sachan, K Dubey, E Xing, M Richardson - Proceedings of the 53rd Annual Meeting of the …, 2015