Authors
Jason Priem, Paul Groth, Dario Taraborelli
Publication date
2012/11/1
Journal
PLoS ONE
Volume
7
Issue
11
Pages
e48753
Publisher
Public Library of Science
Description
What paper should I read next? Who should I talk to at a conference? Which research group should get this grant? Researchers and funders alike must make daily judgments on how best to spend their limited time and money, judgments that are becoming increasingly difficult as the volume of scholarly communication increases. Not only does the number of scholarly papers continue to grow, it is also joined by new forms of communication, from data publications to microblog posts. To deal with incoming information, scholars have always relied upon filters. At first these filters were manually compiled compendia and corpora of the literature. But by the mid-20th century, filters built on manual indexing began to break under the weight of booming postwar science production. Garfield [1] and others pioneered a solution: automated filters that leveraged scientists' own impact judgments, aggregating citations as "pellets of peer recognition" [2]. These citation-based filters have grown dramatically in importance and have become central to how research impact is measured. But, like manual indexing 60 years ago, they may today be failing to keep up with the literature's growing volume, velocity, and diversity [3]. Citations are heavily gamed [4–6], are painfully slow to accumulate [7], and overlook increasingly important societal and clinical impacts [8]. Most importantly, they miss new scholarly forms like datasets, software, and research blogs that fall outside the scope of citable research objects. In sum, citations reflect only formal acknowledgment and thus provide only a partial picture of the science system [9]. Scholars may discuss, annotate …
Total citations
Cited-by chart covering 2013–2024 (per-year counts not recoverable from page residue)
Scholar articles
J Priem, P Groth, D Taraborelli - PloS one, 2012