Posted on 2019-05-16, 10:54. Authored by Marcel Ausloos, Olgica Nedic, Agata Fronczak, Piotr Fronczak
This paper introduces a statistical (and complementary) analysis of peer reviewers in order to approach their "quality" through quantitative measures, thereby leading to quality metrics. Peer reviewer reports for the Journal of the Serbian Chemical Society are examined. The text of each report first has to be adapted to word-counting software in order to avoid jargon-induced confusion when counting word frequencies: e.g., "C" must be distinguished depending on whether it means carbon or Celsius. Thus, every report has to be carefully "rewritten". Thereafter, the quantity, variety and distribution of words are examined in each report and compared to the whole set. Reports from two separate months, according to when they came in, are distinguished in order to detect any hidden spurious effects; coherence between the two samples is found. An empirical distribution is sought through a Zipf–Pareto rank-size law. It is observed that peer review reports are, in this respect, very far from usual texts. Deviations from the usual (first) Zipf's law are discussed. Within this context, a theoretical suggestion for the "best (or worst) report", and by extension the "good (or bad) reviewer", is provided from an entropy argument through the concept of "distance to average" behavior. Another entropy-based measure also allows the journal's reviews (whence its reviewers) to be measured for further comparison with other journals through their own reviewer reports.
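As a rough illustration of the kind of quantification sketched in the abstract (rank-size word counting, a Zipf exponent, and an entropy-based "distance to average"), the following Python sketch shows one possible way to compute such measures. The sample texts, function names and the least-squares fit on log-log scales are illustrative assumptions, not the authors' actual procedure or data.

```python
# Minimal sketch (not the authors' code): rank-size (Zipf) fit and an
# entropy-based "distance to average" for a set of review-report texts.
import math
from collections import Counter

def rank_size(text):
    """Return (ranks, frequencies) of words, most frequent first."""
    freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
    return list(range(1, len(freqs) + 1)), freqs

def zipf_exponent(ranks, freqs):
    """Least-squares slope of log(frequency) vs log(rank); ~ -1 for the classic Zipf law."""
    xs = [math.log(r) for r in ranks]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

def shannon_entropy(text):
    """Shannon entropy (bits) of the word-frequency distribution of one report."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical reports standing in for the actual reviewer texts.
reports = [
    "the synthesis section is clear but the kinetics discussion needs revision",
    "the manuscript is acceptable after minor revision of the spectra discussion",
    "results are not convincing and the conclusions must be rewritten",
]

entropies = [shannon_entropy(t) for t in reports]
mean_h = sum(entropies) / len(entropies)
# "Distance to average" behavior: how far each report's entropy sits from the mean.
distances = [abs(h - mean_h) for h in entropies]

ranks, freqs = rank_size(" ".join(reports))
print("Zipf exponent (pooled reports):", round(zipf_exponent(ranks, freqs), 3))
for i, (h, d) in enumerate(zip(entropies, distances), 1):
    print(f"report {i}: entropy = {h:.3f} bits, distance to average = {d:.3f}")
```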
Funding
This paper is part of scientific activities in COST Action TD1306 New Frontiers of Peer Review (PEERE).
Citation
Scientometrics, 2016, 106, pp. 347-368
Author affiliation
College of Social Sciences, Arts and Humanities / School of Business