The Evaluation of Research by Scientometric Indicators

Péter Jacsó (University of Hawaii)

Online Information Review

ISSN: 1468-4527

Article publication date: 13 April 2012

Citation

Jacsó, P. (2012), "The Evaluation of Research by Scientometric Indicators", Online Information Review, Vol. 36 No. 2, pp. 324-325. https://doi.org/10.1108/14684521211240171

Publisher

Emerald Group Publishing Limited

Copyright © 2012, Emerald Group Publishing Limited


Disclosure: I have known the author, Péter Vinkler, since the mid-1980s, when he was one of the few specialists, along with Tibor Braun, András Schubert and Wolfgang Glänzel, who worked for various libraries and information centres of the Hungarian Academy of Sciences (HAS) and provided information services from the citation indexes of the Institute for Scientific Information (now Thomson Reuters) for scientists and information professionals. I was just becoming interested in citation analysis and in using data from the Journal Citation Reports to optimise collection development at the computer science library that I headed. HAS was the only organisation in the country that had access to this unique and expensive online database family. Since then, Vinkler has gone on to receive a doctorate in scientometrics, has become associate editor of Scientometrics, has written more than 60 papers about scientometric indicators, and has now published a book about assessing research performance through the prism of scholarly publications and the citations those publications have received. He was also the recipient of the 1999 Derek de Solla Price award, the highest recognition for bibliometric/scientometric research.

In 14 chapters this book summarises the varieties, and the pros and cons, of more than a dozen indicators that can be calculated from publication and citation counts: quantifying the clout of the journals in which scholarly papers have been published and cited; aggregating these data at various levels, ranging from the individual scientist through the institution to the nation; considering related factors such as the self-citation rate and the extent of multiple authorship; and combining these to produce indicators that can be used in granting tenure, promotion and awards, as well as in making decisions about journal subscriptions, renewals and cancellations.

The timing of this book's publication is ideal, coinciding with the large-scale assessment projects measuring research performance at the individual, team and institutional levels that we have seen in the UK and Australia, and that are in progress elsewhere in the academic world.

The indicators that Vinkler has developed and used in actual projects form the core of the book, with good explanations, 110 very good summary tables and case studies, all in an overwhelming alphabet soup of acronyms. Vinkler has developed and refined these indicators over the past 25 years. The acronym slang is his mother tongue, but not that of the typical scientists and administrators of academic institutions who are the primary targets of the book.

The work cannot be read in one sitting or two. Distinguishing and recalling the meanings of the indicator acronyms may be as difficult as eating and digesting genuinely Hungarian goulash, fisherman's soup and other trademark elements of the cuisine. It would help to have a glossary on the inside front and back covers of the book.

Almost all the examples come from the field of chemistry, as Vinkler, like many other scientometricians, received his PhD in chemistry. It would have helped very much had he created two or three phantom journals (A, B and C) with nicely rounded phantom totals for publications (say 10), citations (20, 40, 50, 100), self-citations (1, 2, 5, 10) and authorship (1, 2, 10) to illustrate the algorithms of the different indicators and their effects on the scores, as sketched below. Alternatively, a website for buyers of the book could offer mini spreadsheets with the sample data used for the tables, allowing the reader to change the values of some variables.
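To make that suggestion concrete, here is a minimal sketch of such phantom data (my own illustration, not taken from the book); the journal names are invented, and the indicators shown (citations per paper and the self-citation rate) are generic examples rather than Vinkler's specific measures.

    # Hypothetical phantom data of the kind suggested above (not from the book)
    phantom_journals = {
        #             papers  citations  self-citations
        "Journal A": (10,     20,        1),
        "Journal B": (10,     50,        5),
        "Journal C": (10,     100,       10),
    }

    for name, (papers, cites, self_cites) in phantom_journals.items():
        cpp = cites / papers                     # citations per paper
        scr = self_cites / cites                 # self-citation rate
        ext = (cites - self_cites) / papers      # citations per paper, excluding self-citations
        print(f"{name}: {cpp:.1f} citations/paper, {scr:.0%} self-citation rate, "
              f"{ext:.1f} external citations/paper")

Rounded inputs like these would make it immediately visible how changing a single variable, such as the self-citation count, shifts each score.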

There is only a short discussion of the h-index, and its derivatives are barely mentioned. Vinkler has created his own variant of the h-index to fix its shortcoming of ignoring the citations that the papers in the h-core receive beyond the h-value. Vinkler's πv index represents a good idea. I understand that the π symbol stands for the first-name initial and the v for the last-name initial, but the typography and the use of the symbol will backfire, because it is not easy to reproduce in the text and in the bibliography; more importantly, it may be a software nightmare for citation matching and sorting. I know this from my own bad decision to use the accents in my name for patriotic reasons, when even spelling the base characters of my name correctly is challenging for copy editors, who spell it as "Jasco". It is also strange that Vinkler does not refer to Leo Egghe's g-index, which addressed the same problem with a somewhat different solution. Several papers by Egghe appear in the 355-item bibliography, but none that discuss the g-index.
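For readers unfamiliar with the two measures, here is a small illustrative sketch (mine, not from the book) of the standard h-index and Egghe's g-index computed from per-paper citation counts; the two publication records are invented to show how the g-index, unlike the h-index, credits the citations that h-core papers receive beyond the h-value.

    def h_index(citations):
        # Largest h such that h papers have at least h citations each
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    def g_index(citations):
        # Largest g such that the top g papers together have at least g*g citations
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, c in enumerate(ranked, start=1):
            total += c
            if total >= rank * rank:
                g = rank
        return g

    # Two invented records with identical h-indices but very different h-cores
    modest = [5, 4, 4, 4, 2, 1, 0, 0, 0, 0]
    heavy = [100, 60, 30, 4, 3, 2, 1, 0, 0, 0]
    print(h_index(modest), g_index(modest))   # 4 4
    print(h_index(heavy), g_index(heavy))     # 4 10

This is the same shortcoming that Vinkler's πv index sets out to repair, which is why the absence of the g-index from the bibliography is surprising.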

The book is a good choice for scientists who deal with research assessment through citation metrics, but the softcover edition of Henk Moed's excellent book, Citation Analysis in Research Evaluation, came out earlier in 2011, and at a much lower price than Vinkler's work. Nicola De Bellis's book, Bibliometrics and Citation Analysis (2010), addresses some of the same issues as Vinkler's (with more of a philosophical tone), but it has nothing like Vinkler's meticulously prepared and very valuable collection of tables presenting his test results.
