Authors
Chetan Arora, Mehrdad Sabetzadeh, Shiva Nejati, Lionel Briand
Publication date
2019/1/9
Journal
ACM Transactions on Software Engineering and Methodology (TOSEM)
Volume
28
Issue
1
Pages
1-34
Publisher
ACM
Description
Domain models are a useful vehicle for making the interpretation and elaboration of natural-language requirements more precise. Advances in natural-language processing (NLP) have made it possible to automatically extract from requirements most of the information that is relevant to domain model construction. However, alongside the relevant information, NLP extracts from requirements a significant amount of information that is superfluous (not relevant to the domain model). Our objective in this article is to develop automated assistance for filtering the superfluous information extracted by NLP during domain model extraction. To this end, we devise an active-learning-based approach that iteratively learns from analysts’ feedback over the relevance and superfluousness of the extracted domain model elements and uses this feedback to provide recommendations for filtering superfluous elements. We empirically …
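The abstract describes an iterative loop in which a classifier is retrained on analyst feedback and then used to flag likely-superfluous elements. As a rough illustration only, the sketch below shows generic pool-based active learning with uncertainty sampling on a toy one-dimensional relevance score; the threshold classifier, the oracle, and all names are hypothetical and are not the authors' actual technique or tooling.

```python
# Hypothetical sketch of pool-based active learning with uncertainty
# sampling. NOT the paper's implementation: the 1-D "relevance score",
# the threshold classifier, and the simulated analyst are all assumptions.
import math
import random

def train(labeled):
    # Toy classifier: threshold halfway between the mean score of
    # relevant (label 1) and superfluous (label 0) elements.
    rel = [x for x, y in labeled if y == 1]
    sup = [x for x, y in labeled if y == 0]
    return (sum(rel) / len(rel) + sum(sup) / len(sup)) / 2

def predict_proba(threshold, x):
    # Probability of "relevant": sigmoid of distance to the threshold.
    return 1 / (1 + math.exp(-(x - threshold)))

def most_uncertain(threshold, pool):
    # Uncertainty sampling: query the element whose predicted
    # probability is closest to 0.5.
    return min(pool, key=lambda x: abs(predict_proba(threshold, x) - 0.5))

random.seed(0)
pool = [random.uniform(0, 10) for _ in range(50)]   # unlabeled elements
oracle = lambda x: 1 if x > 5 else 0                # simulated analyst feedback
labeled = [(1.0, 0), (9.0, 1)]                      # seed labels, one per class

for _ in range(10):                                 # iterative feedback rounds
    t = train(labeled)
    q = most_uncertain(t, pool)
    pool.remove(q)
    labeled.append((q, oracle(q)))                  # analyst labels the query

threshold = train(labeled)
```

Elements scoring below the learned threshold would then be recommended for filtering as superfluous; in the paper this role is played by the learned model's recommendations to the analyst, not a fixed cutoff.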