Authors
Rocco De Nicola, Marinella Petrocchi, Manuel Pratelli
Publication date
2021/11/1
Journal
Information Processing & Management
Volume
58
Issue
6
Pages
102685
Publisher
Pergamon
Description
For more than a decade now, academics and online platform administrators have been studying solutions to the problem of bot detection. Bots are computer algorithms whose use is far from benign: malicious bots are purposely created to distribute spam, sponsor public figures and, ultimately, bias public opinion. To fight the bot invasion of our online ecosystem, several approaches have been implemented, mostly based on (supervised and unsupervised) classifiers, which adopt the most varied account features, from the simplest to the most expensive to extract from the raw data obtainable through the Twitter public APIs. In this exploratory study, using Twitter as a benchmark, we compare the performance of four state-of-the-art feature sets in detecting novel bots: one of the output scores of the popular bot detector Botometer, which considers more than 1,000 features of an …