Letter to the Editor


Online Information Review

ISSN: 1468-4527

Article publication date: 29 November 2011


Citation

Li, J., Sanderson, M., Willett, P., Norris, M., Oppenheim, C. and Jacso, P. (2011), "Letter to the Editor", Online Information Review, Vol. 35 No. 6. https://doi.org/10.1108/oir.2011.26435faa.002

Publisher: Emerald Group Publishing Limited

Copyright © 2011, Emerald Group Publishing Limited



Article Type: Letter to the Editor From: Online Information Review, Volume 35, Issue 6

We wish to jointly clarify and expand on the statements in the article “Pragmatic issues in calculating and comparing the quantity and quality of research through rating and ranking of researchers based on peer reviews and bibliometric indicators from Web of Science, Scopus and Google Scholar” in Online Information Review, Vol. 34 No. 6, 2010, pp. 972-982.

In his article, Péter Jacsó examined our paper (Li et al., 2010), which compared the utility of citation databases for correlating citation measures with expert judgments. As detailed in the paper, a panel of experts graded 101 leading library and information science (LIS) researchers on a five-point scale (0-4), and the median gradings resulting from these evaluations were then correlated with citation indicators from three different sources. Jacsó (2010) commented on a number of aspects of our study, some of which we expand on here.

Jacsó described re-calculating the median grades of the 101 researchers (as judged by the panel of experts). Jacsó’s method of calculation resulted in 24 of the 101 median values differing from those listed in the original paper. One might conclude that this implies there was a mistake in the paper; however, this would be incorrect, as the two methods of calculating the median were different. As noted in Section 3 of the paper, “each expert chose the researchers that they felt competent to evaluate (excluding themselves if they were on the list), and the median ranking was then computed for each researcher”. This overall median is the number reported in the W column of our main table. In the same section we later subdivided the panel of 42 experts into British (B), American (A) and Other (O) categories, so that:

“Each of the 101 researchers hence had four peer review ratings, namely the median of the values assigned by the complete set of experts and by the experts from each of the three geographical groups. These ratings are listed in the left-hand part of Table 2 where it will be seen, e.g. that Judith Bar-Ilan’s ratings were 2 (for the whole panel, denoted by W), 4, 2.5 and 2 (for the B, A and O experts).”

For this researcher, there were 17 expert judgments: three from the UK (gradings 4, 4, 1), six from the USA (gradings 1, 2, 3, 2, 3, 3) and eight from other countries (2, 2, 3, 2, 2, 2, 3, 2), giving the W, B, A and O median values presented in our paper. Jacsó, however, states that “they calculated this W median from the average of the grades received from the experts in three regions”. This difference in the method of calculating the median appears to explain the difference in the median values reported in Jacsó’s article and the original paper.
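To make the distinction concrete, the short Python sketch below recomputes the figures quoted above for this researcher. It is an illustration only, not code from either paper, and the final calculation is just one possible reading of the per-region approach that Jacsó describes.

from statistics import median

# Grades for one researcher (Judith Bar-Ilan), as listed above
uk = [4, 4, 1]                      # British (B) experts
usa = [1, 2, 3, 2, 3, 3]            # American (A) experts
other = [2, 2, 3, 2, 2, 2, 3, 2]    # Other (O) experts

# Method used in Li et al. (2010): the median of all 17 grades
print(median(uk + usa + other))     # 2 -> the W value in the paper

# Regional medians, also reported in the paper
print(median(uk), median(usa), median(other))   # 4, 2.5, 2

# One possible reading of the per-region calculation (an assumption,
# for illustration only): averaging the three regional medians
print(round(sum(median(g) for g in (uk, usa, other)) / 3, 2))   # 2.83

Any such per-region variant will in general give a different number from the overall median, which is consistent with the explanation above and does not imply an error in either set of calculations.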

Jacsó also commented on the way in which the table of academics was sorted in the published paper; the unusual sorting affected only the presentation of the data and did not affect the correlations reported. Of more concern was Jacsó’s observation that a small number of the individual citation scores reported in the paper were inaccurate. We accept that there were a handful of such errors in the figures reported. The aim of the paper was to compare the utility of different citation databases for ranking LIS academics. The number of errors is small and will not have had any substantial impact on the correlations measured across all academics over a wide range of citation measures. We stand by the central claim of the paper, namely that there is little difference between the three databases – Web of Science, Google Scholar and Scopus – for ranking LIS academics by citation.

We are keen for others to work with this data. We therefore use this note to announce that the set of figures resulting from our work is available for download in .csv format from: www.seg.rmit.edu.au/mark/joi

Jiang Li, Mark Sanderson, Peter Willett, Michael Norris, Charles Oppenheim, Péter Jacsó

Corresponding author

Mark Sanderson can be contacted at: mark.sanderson@rmit.edu.au; Péter Jacsó can be contacted at: jacso@hawaii.edu


References

Jacsó, P. (2010), “Pragmatic issues in calculating and comparing the quantity and quality of research through rating and ranking of researchers based on peer reviews and bibliometric indicators from Web of Science, Scopus and Google Scholar”, Online Information Review, Vol. 34 No. 6, pp. 972-982.

Li, J., Sanderson, M., Willett, P., Norris, M. and Oppenheim, C. (2010), “Ranking of library and information science researchers: comparison of data sources for correlating citation data, and expert judgments”, Journal of Informetrics, Vol. 4 No. 4, pp. 554-563.
