Authors
Eiman Tamah Al-Shammari
Publication date
2009/3/31
Conference
2009 WRI World Congress on Computer Science and Information Engineering
Volume
4
Pages
477-482
Publisher
IEEE
Description
In this paper, an algorithm to normalize noisy text, focusing exclusively on the Arabic language, is introduced. Although many approaches to Arabic text processing have been proposed, none so far has targeted noisy Arabic text. Additionally, this paper introduces a new similarity measure for stemming noisy Arabic documents. The need for such a measure stems from the fact that the rules commonly applied in stemming cannot be applied to noisy texts, which do not conform to known grammatical rules and contain various spelling mistakes. Thus, the proposed normalization algorithm automatically groups words after applying the similarity measure. To validate the algorithm, the new normalization technique is evaluated using the under-stemming error reduction technique introduced by Paice.
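The grouping step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual measure: it substitutes `difflib.SequenceMatcher` for the author's similarity function, and the greedy clustering loop, the `threshold` value, and the example word list are all assumptions for demonstration.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Stand-in measure (difflib's matching-ratio); the paper defines
    # its own similarity measure, which is not reproduced here.
    return SequenceMatcher(None, a, b).ratio()

def group_words(words, threshold=0.8):
    """Greedily assign each word to the first group whose representative
    it resembles above `threshold`; otherwise start a new group."""
    groups = []  # each group is a list; groups[i][0] is its representative
    for w in words:
        for g in groups:
            if similarity(w, g[0]) >= threshold:
                g.append(w)
                break
        else:
            groups.append([w])
    return groups

# Misspelled variants cluster with their well-formed counterpart,
# while an unrelated word starts its own group.
print(group_words(["normalization", "normalizaton", "normlization", "stemming"]))
```

Because noisy words break ordinary stemming rules, grouping by surface similarity lets variants of the same underlying word be conflated without consulting a grammar, which is the intuition behind the paper's normalization step.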
Scholar articles
ET Al-Shammari - 2009 WRI World Congress on Computer Science and …, 2009