Authors
Archana Nottamkandath, Jasper Oosterman, Davide Ceolin, Wan Fokkink
Publication date
2014
Conference
10th International Workshop on Uncertainty Reasoning for the Semantic Web (URSW2014)
Pages
25-36
Publisher
CEUR-WS.org
Description
Cultural heritage institutions are employing crowdsourcing techniques to enrich their collections. However, assessing the quality of crowdsourced annotations is a challenge for these institutions, and manually evaluating all annotations is not feasible. We employ Support Vector Machines and feature-set selectors to understand which annotator and annotation properties are relevant to annotation quality. In addition, we propose a trust model that builds an annotator reputation using subjective logic, and we assess the influence of both annotator and annotation properties on the reputation. We applied our models to the Steve.museum dataset and found that a subset of annotation properties can identify useful annotations with a precision of 90%. However, the annotator properties we studied were less predictive.
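The abstract's reputation model is based on subjective logic. As a minimal sketch (the standard evidence-to-opinion mapping from subjective logic, not the paper's exact model; the function and parameter names are illustrative), an annotator's history of accepted and rejected annotations can be mapped to a belief/disbelief/uncertainty opinion:

```python
# Sketch of the standard subjective-logic mapping from evidence counts
# to an opinion (belief, disbelief, uncertainty); illustrative only.

def opinion(r: float, s: float, a: float = 0.5):
    """Map r positive and s negative observations to a subjective-logic
    opinion with base rate a. The constant 2 is the non-informative
    prior weight of the underlying Beta distribution."""
    k = r + s + 2.0
    b = r / k          # belief: supported by positive evidence
    d = s / k          # disbelief: supported by negative evidence
    u = 2.0 / k        # uncertainty: shrinks as evidence accumulates
    e = b + a * u      # expected trustworthiness (reputation score)
    return b, d, u, e

# Example: an annotator with 8 accepted and 2 rejected annotations.
b, d, u, e = opinion(8, 2)
```

With 8 positive and 2 negative observations this gives belief 8/12, disbelief 2/12, uncertainty 2/12, and an expected trustworthiness of 0.75; more evidence of either kind reduces the uncertainty component.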
Total citations
Cited yearly from 2015 to 2023 (per-year citation chart; counts not recoverable from extraction)
Scholar articles
A Nottamkandath, J Oosterman, D Ceolin, W Fokkink - 10th International Workshop on Uncertainty Reasoning …, 2014