The reward system of science

Adèle Paul-Hus (School of Library and Information Science (EBSI), Université de Montréal, Montréal, Canada)
Nadine Desrochers (School of Library and Information Science (EBSI), Université de Montréal, Montréal, Canada)
Sarah de Rijcke (Center for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands)
Alexander D. Rushforth (Center for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands)

Aslib Journal of Information Management

ISSN: 2050-3806

Article publication date: 18 September 2017


Citation

Paul-Hus, A., Desrochers, N., de Rijcke, S. and Rushforth, A.D. (2017), "The reward system of science", Aslib Journal of Information Management, Vol. 69 No. 5, pp. 478-485. https://doi.org/10.1108/AJIM-07-2017-0168

Publisher

Emerald Publishing Limited

Copyright © 2017, Emerald Publishing Limited


The reward system of science

At the end of the 1950s, Robert K. Merton (1957, 1973) formalized the idea of a reward system of science. Within the Mertonian framework, the scientific ethos consists mainly of four institutional norms: universalism, communism, disinterestedness, and organised scepticism. Its basic precepts are derived from the scientific institution’s main objective, the “extension of certified knowledge” (Merton, 1973, p. 270). According to Merton (1957), “the institution of science has developed an elaborate system for allocating rewards to those who variously live up to its norms” (p. 642) as they strive to participate in this institutional objective. The notion of recognition can be broadly defined as “the giving of symbolic and material rewards” (Merton, 1973, p. 429) by scientific peers; it is attributed to researchers who contribute to the advancement of scientific knowledge through their original work. Recognition therefore lies at the foundation of this reward system and constitutes, in the Mertonian view, both a driving force behind researchers’ actions and the pillar upon which scientific careers are – or at least can be – built.

Many decades down the line, the transformations of scholarly communication have led to important modifications in the landscape of scientific recognition, which has traditionally been based on authorship and citations but now also extends to the growing use of social media within the academic context. Furthermore, alongside a multitude of new phenomena in the social practices of scientific reward and evaluation, a number of theoretical developments in science studies – at times derived from quantitative or qualitative research – have emerged since the publication of Merton’s (1973) famous work. One such transformation has been a “practice turn” in the study of science: whereas Merton focussed on the institutionalized patterns of reward, from peer review to prizes, contemporary science studies have sought to delve into knowledge production by observing the sites of modern science and research. This has important implications for the study of the reward system of science as it stands today, and can bring to light how patterns of rewards and incentives shape the very knowledge that can be, and is being, produced.

Quantitative science studies and the bibliometrics field have yielded an enormous range of metrics that, today, are often separated from their more qualitative, practice-oriented counterparts. This has important consequences: such developments may transform not only our ability to gain insight into and understanding of reward dynamics, but potentially also the very systems themselves. Questions surrounding measures based on easily available large-scale data, sometimes disconnected from the actual reality of research activity, are certainly emerging in the literature. Therefore, although interest in the host of new data sources and modes of communication has clearly increased in recent times, the extent of their influence, and their validity and reliability as evaluative and analytic tools, must continue to be probed.

Although we cannot hope to index or summarise all the changes affecting the reward system of science, we believe the papers united here speak in many different ways to a number of these developments and transformations.

The team

The guest editorial team for this special issue unites two important poles in the study of scholarly communication and research evaluation: the Centre for Science and Technology Studies, Leiden University (the Netherlands), and the Canada Research Chair on the Transformations of Scholarly Communication, University of Montreal (Canada). We are researchers at various points in our careers and stem from various disciplinary backgrounds, including anthropology, library and information science, literature, science and technology studies, sociology, and theatre, each bringing our own perspective to the special issue. In our previous work, we have used a variety of toolboxes to study and discuss different aspects of scientific recognition. We have built upon the conceptual frameworks of Pierre Bourdieu, Blaise Cronin, Lucien Karpik, Karin Knorr Cetina, Bruno Latour, and Robert K. Merton, combining them with methodological approaches ranging from ethnographic observation to qualitative content analysis and bibliometric measures, used individually and at times in combination. We have touched upon an array of topics relating to the reward system of science: authorship, acknowledgements, the various types of symbolic capital bestowed within the scientific reward system, current practices of knowledge production, responsible research evaluation, and metrics uses (e.g. Desrochers et al., 2016, 2017; Desrochers, Paul-Hus, Haustein, Costas, Mongeon, Quan-Haase, Bowman, Pecoskie, Tsou and Larivière, forthcoming; Fochler and de Rijcke, 2017; Larivière et al., 2016; Paul-Hus et al., 2017; Rushforth and de Rijcke, 2015, 2016; Hammarfelt et al., 2016; de Rijcke et al., 2016).

With this special issue, we wished to bring together some of the concerns that seem to be of particular interest to academics today as they navigate the turbulent seas of the reward system of science in their own discipline and at each stage of their own career. We received 22 manuscripts; ten papers were accepted for publication following a peer review process performed by 32 external reviewers, for a final acceptance rate of 45 per cent.
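As a quick check of the figure reported above, the acceptance rate follows directly from the counts of submitted and accepted manuscripts:

\[
\text{acceptance rate} = \frac{10 \text{ accepted}}{22 \text{ submitted}} \approx 0.455 \approx 45 \text{ per cent}
\]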

Themes and trends

A brief analysis of the keywords provided by the authors to describe their contribution offers an informative portrait of the content of the special issue. The reward system of science – mentioned in four papers, along with the related keywords “currencies of science” (1), “monetary reward” (1), and “reward triangle” (1) – appears as a dominant theme, as expected. Research evaluation also emerges as a prominent topic of investigation with the keywords “research evaluation” (4), “impact” (1), “peer review” (1), “performance indicators” (1), “REF” (1), and “research quality” (1). Writing and publishing are also important themes in this special issue, as reflected by keywords such as “academic writing” (1), “cash-per-publication” (1), “predatory publishing” (1), “publication cultures” (1), “publishing motivation” (1), and “publishing oligopoly” (1).

More broadly, the academic context, central to this issue, appears through such keywords as “academia” (1), “academic careers” (1), “career stage” (1), “higher education” (1), “geopolitics of the academy” (1), and “research profession” (1). Conversely, specificity is found in the different forms of scientific recognition explored in several papers, as demonstrated by the following keywords: “authorship” (1), “contributorship” (1), “acknowledgements” (2), and “subauthorship” (1). In terms of methods, “bibliometrics” (2), “scientometrics” (1), “citation analysis” (1), and their tools are mentioned in some papers, along with the sources of data used, such as “Web of Science” (3), “Elsevier” (1), and “DataCite” (1). Finally, “data sharing” (1) and “Mendeley readership” (1) are also present, as they are analysed with regard to their potential integration within the reward system of science.

While the keywords certainly reflect the thematic coherence of the issue, it is in the conceptual and theoretical frameworks used in the contributions that the mosaic begins to reveal its richness and diversity. Given the focus of the issue, it is not surprising to see Merton (1973, 1988) cited in six of the ten papers. Whitley’s work on the social and intellectual organisation of academia (e.g. Whitley, 2000; Whitley et al., 2010) is cited in four papers, while Cronin (e.g. Cronin, 1995, 2001; Cronin and Weaver-Wozniak, 1993; Cronin et al., 2003, 2004) is cited in three papers. Bourdieu (1975, 1986, 1988) and Knorr Cetina (1999) are cited in two papers each. Other conceptual and theoretical frameworks include the works of Hagstrom (1965), Latour and Woolgar (1986), Karpik (2010), and Ziman (1987, 2000), each cited in one paper.

Looking now at the methodologies of the studies included in this special issue, we see again an array of approaches, with an almost perfect balance between quantitative and qualitative designs. Four papers adopted a quantitative approach, including bibliometric analyses (Costas et al., 2017; Mongeon et al., 2017; Quan et al., 2017; Sundling, 2017), while qualitative content analyses, of assessment reports or of interviews with academics, were used in three papers (Hammarfelt, 2017; Hangel and Schmidt-Pfister, 2017; McCulloch, 2017). Two papers proposed conceptual frameworks based on literature reviews and theoretical analyses (Díaz-Faes and Bordons, 2017; Perez Vico and Hallonsten, 2017), and one paper offered an overarching perspective on the publishing context (Stöckelová and Vostal, 2017).

In summary, the topics, conceptual frameworks, and methodological approaches covered in the special issue provide a snapshot of how a sample of researchers stemming from various traditions view the current state of the reward system of science. The perspectives of these researchers emerge, indirectly, through the toolboxes they use, the outputs they analyse, and the people they portray. However, it is also important to see the collective picture drawn by the concerns, warnings, and recommendations conveyed through the theoretical outlooks they propose, the paths to new indicators they devise, the conclusions they draw, the opinions and hopes they offer. Indeed, the contributions included in this issue are also the voices of academics who currently work and evolve within this system, and look to amass its rewards while making sense of its historical background, its demands and contradictions, its contemporary reality. Therefore, rather than simply outline the special issue by reproducing the contents of the papers which constitute it, we looked at their conclusion sections in order to extract the driving forces and position statements that reflect the authors’ concerns.

Here are some of the things they had to say.

Money does not just talk, it writes

According to the oft-quoted Goodhart’s Law, whether in its original phrasing or its more common adage form, once a measure itself becomes the goal, it ceases to perform well as a measure (Newton, 2011). Indicators like journal impact factors and rankings are increasingly mobilised as “technologies of government” (Miller and Rose, 1990, p. 7) to steer and modify “from a distance” the behaviours and decisions of researchers. Indeed, the way bibliometric indicators and other tools are being deployed in the current context of academic audit and economic pressure seems to be precisely about establishing goals and benchmarks against which researchers are expected to adjust their behaviours. This is made clear in several contributions to the special issue.

Quan et al. (2017) tell us of the “cash-per-publication” model in place in Chinese institutions since the late 1990s. This business-inspired model may astonish many of us, since “the amount of cash reward for publications is much higher than university professors’ annual salaries” (p. 499). This puts a clear focus on increasing productivity, which becomes the crux of recognition, and therefore of the reward system as a whole, in the institutions concerned. It has seemingly contributed to “a historic high” in article corrections after publication (Quan et al., 2017, p. 498). Furthermore, even though some quality-based criteria have been put in place, the authors warn against an “abusive use of bibliometric indicators”, along with the almost-exclusive reliance of many policies on the journal impact factor and the Web of Science, thereby excluding “millions of papers published in Chinese journals” (Quan et al., 2017, p. 498).

While such policies may seem extreme, one can ponder whether their foundational motives are so different from those of the Research Excellence Framework (REF) currently in place in England. Here, according to McCulloch (2017), expectations push researchers “to align their writing practices with a neoliberal culture that fundamentally misunderstands the nature of the scholarly writing process as an easily reproducible technical skill rather than a difficult, creative and rather unpredictable endeavour” (p. 512). This is detrimental, in the author’s view, to the “forms of knowledge creation” (McCulloch, 2017, p. 512) that are not rewarded in the REF scheme; the author points to the effects this can have on all aspects of research, from topics to methods, and notably on “risky or innovative research” (McCulloch, 2017, p. 513). The scheme is also seen as a hindrance to researchers (particularly younger ones) trying to build their resumes in institutions that value teaching more than research, thereby limiting the time and resources available for the latter. These tensions amount to a fundamental contradiction that can go so far as to create a climate of fear when academics’ choices become “not really choices at all, since writing towards REF-driven targets is something academics have to do not only in order to progress in their career, but also to keep their current job and avoid sanction” (McCulloch, 2017, p. 513).

In the face of for-profit oligopolies, the “entrenched geopolitical asymmetry” between western and reputedly “local” standards, and the rise of predatory publishing (along with the type of practices it fosters), Stöckelová and Vostal (2017, p. 524) use the Czech context to call for “more non-profit publications and qualifying initiatives cultivated by academic communities and institutions […] out of the Anglophone West” and for “community-based initiatives [to] be specifically supported within the EU framework schemes in order to strengthen the open access (OA)” (p. 524). They wish for more “geopolitically inclusive” western platforms and for “dedicated attention [to] be paid to geopolitical issues and entrenched asymmetries in play” (Stöckelová and Vostal, 2017, p. 525). They end with a plea to all academics involved to “nourish transversal, truly OA connections and relations”, away from the “ghettos” or artificial solutions that can threaten inclusion endeavours (Stöckelová and Vostal, 2017, p. 525).

Hangel and Schmidt-Pfister (2017) look at different types of motivations to publish throughout academic career stages; not surprisingly, they use words like “uncertainty” (doctoral students; p. 534) and “survival” (post-doctoral researchers). The authors are eloquent in stating what many academics feel: “the tension between wanting and having to publish reveals a shift from publishing as a consequence of getting interesting research results, to doing research in order to publish” (Hangel and Schmidt-Pfister, 2017, p. 541). While “extensive strategically oriented pressures do not (yet) completely override researchers’ epistemic motivations to publish” (Hangel and Schmidt-Pfister, 2017, p. 541), the authors warn against the particular effects of the pressures to publish on more vulnerable groups “in unsecure positions”, such as “postdocs and new professors” (Hangel and Schmidt-Pfister, 2017, p. 541). So while this may not be related to money per se, support is most needed where the question of a secure job in academia – and one of basic financial survival as well – is concerned.

Do not use what you do not understand

The deluge of new data sources that accompanies and supports the transformations of academic writing, reading, and results sharing in the current digital context has opened the door to new sets of indicators. However, the heterogeneity, validity, and reliability of these metrics continue to generate discussions regarding their meaning and value in the reward system of science, as demonstrated by the following contributions to the special issue.

Mongeon et al. (2017) show that while data sharing could become a part of the reward system of science, current practices on platforms such as DataCite demonstrate that such “self-reported tools” “are still more of a promise than a reality” (p. 553). Furthermore, at this time, there is no concrete measure of “the impact of this output” (Mongeon et al., 2017, p. 553), an output which also remains one of broad “diversity and heterogeneity” (Mongeon et al., 2017, p. 553). Further steps should therefore be taken with a view to “recognize responsible practices and open science, ensuring greater transparency and data reuse” (Mongeon et al., 2017, p. 553).

The same can be said of Mendeley-based indicators, for which “in-depth research about the differences and similarities between the distribution of [readership and citations] across fields, and empirical support for the suitability of field normalization of Mendeley readership are still lacking” (Costas et al., 2017, p. 568). As noted by the authors, there are situations where a “currency conversion” between the two metrics could become “a bone of contention in the determination of scientific recognition” (Costas et al., 2017, p. 569). Further research, here, is once more needed to “disentangle the potential values” (Costas et al., 2017, p. 570) of readerships and citations in terms of what these indicators claim to measure.

Vagueness has surrounded another potential indicator, acknowledgements, for many decades. It has often been said that acknowledgements can highlight “the contribution of acknowledged individuals, which may deserve recognition” (Díaz-Faes and Bordons, 2017, p. 585). As Díaz-Faes and Bordons (2017) remind us, acknowledgements reveal what otherwise cannot be seen in the scholarly communication process, such as “author interactions” in the humanities, where “single-authored papers are the norm” (p. 586); yet the authors warn us once more that “we need to improve our understanding of acknowledgement practices, which will support decision making about the interest, convenience and manner of using these indicators in research performance assessments” (Díaz-Faes and Bordons, 2017, p. 586). They also adopt a stance whereby “[s]etting measures to foster the standardization of acknowledgement data so far as authors, journals and databases are concerned would contribute to enhance the scope and reliability of the studies” (Díaz-Faes and Bordons, 2017, p. 587).

Sundling (2017), in looking at the age-old tension between authorship and acknowledgements in terms of recording contributions, notes that there is “a disconnect in author attribution between traditional author guidelines and scientific practice” (p. 604) for contributions such as “providing compounds and samples, or expertise and advice” (p. 604), as well as an “arbitrariness” in the bestowment of authorship status in other cases (p. 604) – something to which much of the literature surrounding this topic can testify, despite the established supremacy of authorship status in the reward system of science. The author therefore points to “the need to extend the normal use of co-authorship as a proxy for collaboration, to also include individuals and organizations mentioned in the acknowledgments section” (Sundling, 2017, pp. 591-606) in order to reflect current laboratory practices. Once again, while the potential indicator is there, the questions of how to use it and what to measure with it remain unanswered.

If context counts, should it not matter?

The importance of context has been brought to the forefront of science and technology studies recently, and context seems, indeed, to be very much on the minds of certain authors contributing to this special issue.

If potential indicators and their use can create uncertainty, the weight of this uncertainty is perhaps never more felt than when a whole career is at stake, as is the case in the evaluation of applicants to academic positions. As Hammarfelt’s (2017) work shows, while criteria can span multiple fields, the subtlety may lie in “the emphasis placed on these criteria” (p. 619), making it clear that “disciplinary differences do have great influence on evaluation procedures” (p. 619). The relationship to “temporality and trajectoral thinking” (Hammarfelt, 2017, p. 620) is also brought forth as potentially discipline-specific. Perhaps more importantly still, Hammarfelt (2017) reminds us that the criteria themselves should be subject to evaluation since, contrary to what sometimes appears to be popular academic belief, “the actual tools and devices used to make these criteria tangible and comparable are distinct and not easily generalised” (p. 620).

And what of indicators meant to measure the relationship between research and society? As Perez Vico and Hallonsten (2017) suggest, “it is, of course, also naïve to think that empirical studies, no matter how well-equipped with theoretical tools, can do full justice to the organizational complexity of contemporary academic work on micro-level, or the societal impact of academia in every thinkable aspect” (p. 634). The authors posit that “impact is something far more complex than quantitatively oriented performance assessment can capture” (Perez Vico and Hallonsten, 2017, p. 625) and call not only for the long-heralded “mixes of qualitative and quantitative methods”, but also for “further conceptual refinement” in the imbrication of theoretical frameworks or toolboxes (including their own) and empirical studies (Perez Vico and Hallonsten, 2017, p. 634). They end their paper with a warning that context and adaptation are key in applying their framework – a warning that is oft-repeated in the literature yet not, seemingly, always heeded, and that therefore bears repeating.

Looking back, thinking ahead

If these voices have one collective concern, it seems it could be summed up in one word: respect. Respect for varying contexts, needs, topics, methods; respect for the academic process of research, its transparency, its accountability; but also respect for the people entrusted with the pursuit of knowledge and their choices, away from the pressures of non-realistic and unduly rigid measures of recognition. The current reward system of science seems to have little forgiveness for subject niches, new pursuits, innovative methods, autonomy of judgement, dead ends, or mistakes – even the most natural, evolutional mistakes, like the ones linked to the state of knowledge, its contexts, and the experimentations that come with research; it is as though we have forgotten that the Earth was once thought to be flat. These voices raise a collective warning that undue pressure on academics can ultimately cause the very concept of research to lose its meaning as investigation, exploration, and the quest for understanding; it becomes less about results than about delivering them in a certain way, in certain venues, and according to certain pre-set standards.

Most academic articles pave the way for, and at times even demand, “further research”. Will the further research announced in this issue be pursued, especially given that it concerns us all directly?

This special issue is a reality check, but also a call to arms and integrity. We hope that putting together this rich mosaic of papers, which was made possible by a dedicated group of authors and reviewers, leads to new discussions on the barriers and reinforcements of values that foster responsible reward systems in science. While the Mertonian view of the reward system of science and its norms can be seen as somewhat too idealistic in the current context, there is something to be said for ideals in times of imbalance or turbulence. A new reading of some of these texts, not as abstract norms, but as the reminders of the community and knowledge we build in our endeavours, may push us to become more collectively reflexive about the tools we use. If this is indeed an era of financial pressures, misunderstood metrics, and loss of contextualisation, such ideas may inspire new ways to promote awareness, in order to then generate more responsiveness in concrete and situated practices. In short, as is so often the case in science, we are always invited to look back if it can help us think ahead.

References

Bourdieu, P. (1975), “The specificity of the scientific field and the social conditions of the progress of reason”, Social Science Information, Vol. 14 No. 6, pp. 19-47.

Bourdieu, P. (1986), “The forms of capital”, Handbook of Theory and Research for the Sociology of Education, Greenwood, Westport, CT, pp. 241-258.

Bourdieu, P. (1988), Homo Academicus, Stanford University Press, Stanford, CA.

Costas, R., Perianes-Rodríguez, A. and Ruiz-Castillo, J. (2017), “On the quest for currencies of science: field ‘exchange rates’ for citations and Mendeley readership”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 557-575.

Cronin, B. (1995), The Scholar’s Courtesy: The Role of Acknowledgement in the Primary Communication Process, Taylor Graham, London.

Cronin, B. (2001), “Hyperauthorship: a postmodern perversion or evidence of a structural shift in scholarly communication practices?”, Journal of the American Society for Information Science and Technology, Vol. 52 No. 7, pp. 558-569.

Cronin, B. and Weaver-Wozniak, S. (1993), “Online access to acknowledgements”, in Williams, M.E. (Ed.), Proceedings of the Fourteenth National Online Meeting 1993, Learned Information, Inc., New York, NY; and Medford, NJ, pp. 93-98, 4-6 May.

Cronin, B., Shaw, D. and La Barre, K. (2003), “A cast of thousands: coauthorship and subauthorship collaboration in the 20th century as manifested in the scholarly journal literature of psychology and philosophy”, Journal of the American Society for Information Science and Technology, Vol. 54 No. 9, pp. 855-871.

Cronin, B., Shaw, D. and La Barre, K. (2004), “Visible, less visible, and invisible work: patterns of collaboration in 20th century chemistry”, Journal of the American Society for Information Science and Technology, Vol. 55 No. 2, pp. 160-168.

de Rijcke, S., Wouters, P.F., Rushforth, A.D., Franssen, T.P. and Hammarfelt, B. (2016), “Evaluation practices and effects of indicator uses – a literature review”, Research Evaluation, Vol. 25 No. 2, pp. 161-169.

Desrochers, N., Paul-Hus, A. and Pecoskie, J. (2017), “Five decades of gratitude: A meta-synthesis of acknowledgements research”, Journal of the Association for Information Science and Technology.

Desrochers, N., Paul-Hus, A. and Larivière, V. (2016), “The angle sum theory: exploring the literature on acknowledgments in scholarly communication”, in Sugimoto, C.R. (Ed.), Theories of Informetrics and Scholarly Communication, De Gruyter Mouton, Berlin, pp. 225-247.

Desrochers, N., Paul-Hus, A., Haustein, S., Costas, R., Mongeon, P., Quan-Haase, A., Bowman, T.D., Pecoskie, J., Tsou, A. and Larivière, V. (forthcoming), “Authorship, citations, acknowledgments, and visibility in social media: symbolic capital in the multifaceted reward system of science”, Social Science Information.

Díaz-Faes, A.A. and Bordons, M. (2017), “Making visible the invisible through the analysis of acknowledgements in the humanities”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 576-590.

Fochler, M. and de Rijcke, S. (2017), “Implicated in the indicator game? An experimental debate”, Engaging Science, Technology, and Society, No. 3, pp. 21-40, doi: 10.17351/ests2017.108.

Hagstrom, W.O. (1965), The Scientific Community, Basic Books, New York, NY.

Hammarfelt, B. (2017), “Recognition and reward in the academy: valuing publication oeuvres in biomedicine, economics and history”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 607-623.

Hammarfelt, B., de Rijcke, S. and Rushforth, A.D. (2016), “Quantified academic selves: the gamification of science through social networking services”, Information Research, Vol. 21 No. 2, paper SM1 (special issue on critical perspectives on social media research).

Hangel, N. and Schmidt-Pfister, D. (2017), “Why do you publish? On the tensions between generating scientific knowledge and publication pressure”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 529-544.

Karpik, L. (2010), Valuing the Unique: The Economics of Singularities, Princeton University Press, Princeton, NJ.

Knorr Cetina, K. (1999), Epistemic Cultures: How the Sciences Make Knowledge, Harvard University Press, Cambridge, MA.

Larivière, V., Desrochers, N., Macaluso, B., Mongeon, P., Paul-Hus, A. and Sugimoto, C. (2016), “Contributorship and division of labor in knowledge production”, Social Studies of Science, Vol. 46 No. 3, pp. 417-435.

Latour, B. and Woolgar, S. (1986), Laboratory Life: The Construction of Scientific Facts; with a new postscript and index by the authors, Princeton University Press, Princeton, NJ.

McCulloch, S. (2017), “Hobson’s choice: the effects of research evaluation on academics’ writing practices in England”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 503-515.

Merton, R.K. (1957), “Priorities in scientific discovery: a chapter in the sociology of science”, American Sociological Review, Vol. 22 No. 6, pp. 635-659.

Merton, R.K. (1973), The Sociology of Science: Theoretical and Empirical Investigations, University of Chicago Press, Chicago, IL.

Miller, P. and Rose, N. (1990), “Governing economic life, economy and society”, Economy and Society, Vol. 19 No. 1, pp. 1-31, available at: http://dx.doi.org/10.1080/03085149000000001

Mongeon, P., Robinson-Garcia, N., Jeng, W. and Costas, R. (2017), “Incorporating data sharing to the reward system of science: linking DataCite records to authors in the Web of Science”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 545-556.

Newton, A.C. (2011), “Implications of Goodhart’s Law for monitoring global biodiversity loss”, Conservation Letters, Vol. 4 No. 4, pp. 264-268, available at: http://dx.doi.org/10.1111/j.1755-263X.2011.00167.x

Paul-Hus, A., Mongeon, P., Sainte-Marie, M. and Larivière, V. (2017), “The sum of it all: revealing collaboration patterns by combining authorship and acknowledgements”, Journal of Informetrics, Vol. 11 No. 1, pp. 80-87, available at: http://dx.doi.org/10.1016/j.joi.2016.11.005

Perez Vico, E. and Hallonsten, O. (2017), “A resource- and impact-based micro-level conceptualization of collaborative academic work”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 624-639.

Quan, W., Chen, B. and Shu, F. (2017), “Publish or impoverish: an investigation of the monetary reward system of science in China (1999-2016)”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 486-502.

Rushforth, A.D. and de Rijcke, S. (2015), “Accounting for impact? The journal impact factor and the making of biomedical research in the Netherlands”, Minerva, Vol. 53, pp. 117-139, available at: http://dx.doi.org/10.1007/s11024-015-9274-5

Rushforth, A.D. and de Rijcke, S. (2016), “Quality monitoring in transition: the emerging challenge of evaluating translational research programs in academic biomedicine”, Science and Public Policy, pp. 1-11.

Stöckelová, T. and Vostal, F. (2017), “Academic stratospheres-cum-underworlds: when highs and lows of publication cultures meet”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 516-528.

Sundling, P. (2017), “The many hands of science: commonalities and differences in the research contributions of authors and subauthors”, Aslib Journal of Information Management, Vol. 69 No. 5, pp. 591-606.

Whitley, R. (2000), The Intellectual and Social Organization of the Sciences, Oxford University Press, Oxford.

Whitley, R., Gläser, J. and Engwall, L. (2010), Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and Their Consequences for Intellectual Innovation, Oxford University Press, Oxford.

Ziman, J. (1987), Knowing Everything about Nothing: Specialization and Change in Research Careers, Cambridge University Press, Cambridge.

Ziman, J. (2000), Real Science: What It Is and What It Means, Cambridge University Press, Cambridge.

Further reading

Merton, R.K. (1968), “The Matthew effect in science”, Science, Vol. 159 No. 3810, pp. 56-63.

Acknowledgements

The guest editors would like to thank all the authors for submitting their work and contributing to this special issue, as well as the 32 reviewers for their time and valuable expertise.
