Authors
Ivan Habernal, Iryna Gurevych
Publication date
2016
Conference
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Pages
1214-1223
Publisher
Association for Computational Linguistics
Description
This article tackles a new and challenging task in computational argumentation. Given a pair of arguments on a controversial topic, we aim to directly assess qualitative properties of the arguments in order to explain why one argument is more convincing than the other. We approach this task in a fully empirical manner by annotating 26k explanations written in natural language. These explanations describe the convincingness of the arguments in a given pair, such as their strengths or flaws. We create a new crowd-sourced corpus containing 9,111 argument pairs, multi-labeled with 17 classes, which was cleaned and curated under several strict quality measures. We propose two tasks on this data set: (1) predicting the full label distribution and (2) classifying the types of flaws in less convincing arguments. Our experiments with feature-rich SVM learners and bidirectional LSTM neural networks with convolution and attention mechanisms reveal that such a fine-grained analysis of Web argument convincingness is a very challenging task. We release the new corpus UKPConvArg2 and the accompanying software to the research community under permissive licenses.