Guest editorial: social media in scholarly communication

Stefanie Haustein (École de bibliothéconomie et des sciences de l’information (EBSI), Université de Montréal, Montréal, QC, Canada)
Cassidy Sugimoto (School of Informatics and Computing, Indiana University Bloomington, Bloomington, IN, United States)
Vincent Larivière (École de bibliothéconomie et des sciences de l’information (EBSI), Université de Montréal & Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, Montréal, QC, Canada)

Aslib Journal of Information Management

ISSN: 2050-3806

Article publication date: 18 May 2015


Citation

Haustein, S., Sugimoto, C. and Larivière, V. (2015), "Guest editorial: social media in scholarly communication", Aslib Journal of Information Management, Vol. 67 No. 3. https://doi.org/10.1108/AJIM-03-2015-0047

Publisher: Emerald Group Publishing Limited



1. Introduction

This year marks 350 years since the inaugural publications of both the Journal des Sçavans and the Philosophical Transactions, first published in 1665 and considered the birth of the peer-reviewed journal article. This form of scholarly communication has not only remained the dominant model for disseminating new knowledge (particularly in science and medicine), but has also increased substantially in volume. Derek de Solla Price – the “father of scientometrics” (Merton and Garfield, 1986, p. vii) – was the first to document the exponential increase in scientific journals, showing that “scientists have always felt themselves to be awash in a sea of the scientific literature” (Price, 1963, p. 15), a sentiment expressed, for example, at the Royal Society's Scientific Information Conference in 1948:

Not for the first time in history, but more acutely than ever before, there was a fear that scientists would be overwhelmed, that they would be no longer able to control the vast amounts of potentially relevant material that were pouring forth from the world's presses, that science itself was under threat (Bawden and Robinson, 2008, p. 183).

One of the solutions proposed to help scientists filter the most relevant publications, and thus stay current with developments in their fields during the transition from “little science” to “big science”, was citation indexing, conceived as a Wellsian “World Brain” (Garfield, 1964) of scientific information:

It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable (Garfield, 1955, p. 108).

In retrospect, citation indexing can be seen as a pre-social-web form of crowdsourcing, as it is based on the premise that the community of citing authors outperforms indexers in highlighting cognitive links between papers, particularly at the level of specific ideas and concepts (Garfield, 1983). Over the last 50 years, citation analysis and, more generally, bibliometric methods have developed from information retrieval tools into research evaluation metrics, where they are presumed to make scientific funding more efficient and effective (Moed, 2006). However, the dominance of bibliometric indicators in research evaluation has also led to significant goal displacement (Merton, 1957) and to the oversimplification of notions of “research productivity” and “scientific quality”, creating adverse effects such as salami publishing, honorary authorships, citation cartels, and the misuse of indicators (Binswanger, 2015; Cronin and Sugimoto, 2014; Frey and Osterloh, 2006; Haustein and Larivière, 2015; Weingart, 2005).

Furthermore, the rise of the web and, subsequently, the social web has challenged the quasi-monopolistic status of the journal as the main form of scholarly communication and of citation indices as the primary assessment mechanism. Scientific communication is becoming more open, transparent, and diverse: publications are increasingly open access; manuscripts, presentations, code, and data are shared online; research ideas and results are discussed and criticized openly on blogs; and new peer review experiments, with open post-publication assessment by anonymous or non-anonymous referees, are underway. The diversification of scholarly production and assessment, paired with the increasing speed of the communication process, exacerbates information overload (Bawden and Robinson, 2008) and demands new filters.

The concept of altmetrics, short for alternative (to citation) metrics, was created as an attempt to provide such a filter (Priem et al., 2010) and to counter the oversimplified measurement of scientific success based solely on the number of journal articles published and citations received, by considering a wider range of research outputs and metrics (Piwowar, 2013). Although the term altmetrics was introduced in a tweet in 2010 (Priem, 2010), the idea of capturing traces – “polymorphous mentioning” (Cronin et al., 1998, p. 1320) – of scholars and their documents on the web to measure the “impact” of science more broadly than citations had been introduced years before, largely in the context of webometrics (Almind and Ingwersen, 1997; Thelwall et al., 2005):

There will soon be a critical mass of web-based digital objects and usage statistics on which to model scholars' communication behaviors – publishing, posting, blogging, scanning, reading, downloading, glossing, linking, citing, recommending, acknowledging – and with which to track their scholarly influence and impact, broadly conceived and broadly felt (Cronin, 2005, p. 196).

A decade after Cronin's prediction and five years after the coining of altmetrics, the time seems ripe to reflect upon the role of social media in scholarly communication. This Special Issue does so by providing an overview of current research on the indicators and metrics grouped under the umbrella term altmetrics, on their relationships with traditional indicators of scientific activity, and on the uses that scientists of various disciplines make of the social media platforms on which these indicators are based.

2. Terminology and definition

The set of metrics commonly referred to as altmetrics is usually based on measurements of online activity, derived from social media and Web 2.0 platforms, related to scholars or scholarly content. As such, these metrics can be considered a proper subset of webometrics and scientometrics. However, the definition of what constitutes an “altmetric” indicator is in constant flux, as it is largely determined by technical possibilities and, more specifically, by the availability of application programming interfaces (APIs). The common denominator of the various altmetrics is that they are defined in opposition to, and exclude, “traditional” bibliometric indicators (see e.g. Priem et al., 2010), while often including usage metrics – despite the fact that usage indicators have been available for much longer and are not based on social media platforms (Haustein, 2014). More recently, and quite inclusively, Priem (2014, p. 266) defined the field of altmetrics as the “study and use of scholarly impact measures based on activity in online tools and environments.”
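
Because the boundaries of altmetrics are shaped by what aggregator APIs expose, most empirical work in this area starts by harvesting event counts per document identifier. The sketch below illustrates this kind of harvesting against the public Altmetric.com DOI endpoint; the endpoint URL and the JSON field names are assumptions based on that API as documented around the time of writing, not something prescribed by this editorial.

```python
# Minimal sketch: retrieving social media event counts for a DOI from an
# altmetrics aggregator. The endpoint and field names below are assumptions
# based on the public Altmetric.com v1 API and may have changed.
import requests

def fetch_altmetric_counts(doi):
    """Return selected event counts for a DOI, or None if the DOI is not tracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:   # DOI not tracked by the aggregator
        return None
    resp.raise_for_status()
    data = resp.json()
    return {
        "tweets": data.get("cited_by_tweeters_count", 0),
        "facebook_walls": data.get("cited_by_fbwalls_count", 0),
        "blog_posts": data.get("cited_by_feeds_count", 0),
        "mendeley_readers": int(data.get("readers", {}).get("mendeley", 0)),
    }

if __name__ == "__main__":
    # Example DOI taken from the reference list (Thelwall et al., 2013)
    print(fetch_altmetric_counts("10.1371/journal.pone.0064841"))
```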

2.1. The name debate: altmetrics, article level metrics, social media metrics – or just metrics?

There has been considerable debate and confusion surrounding the meaning of the term altmetrics. Many have treated PLOS's Article-Level Metrics (ALM) program (Fenner, 2013) – the first major attempt to systematically provide numbers on papers' bookmarks on CiteULike and Connotea; mentions in blog posts, reader comments, and ratings; as well as citations, article views, and downloads – as synonymous with altmetrics. However, the term altmetrics was coined partly in criticism of article-level metrics as too constraining. As discussed above, Priem (2014) later broadened the definition to include scholarly impact measures available on any online platform.

The scientometric community quickly responded. Rousseau and Ye (2013, p. 2) claimed that altmetrics was “a good idea but a bad name” and proposed to replace it with influmetrics, a term that “suggest[s] diffuse and often imperceptible traces of scholarly influence – to capture the opportunities for measurement and evaluation afforded by the new environment” (Cronin, 2005, p. 176). The term was introduced by Davenport and initially discussed by Cronin and Weaver (1995) in the context of acknowledgments and webometrics. Emphasizing the origin of the data rather than the intent or meaning of the new metrics, Haustein et al. (2014b) proposed the term social media metrics. However, with a changing landscape of platforms and with definitions shaped by the data collection methods of aggregators and vendors, the term social media metrics might be too restrictive. For example, Plum Analytics incorporates library holdings (Parkhill, 2013), and Altmetric.com monitors newspapers and has started to include mentions in policy documents (Liu, 2014). Thus, the heterogeneity and dynamism of the scholarly communication landscape make a suitable umbrella term elusive. It may be time to stop labeling these metrics as parallel and oppositional (i.e. altmetrics vs bibliometrics) and instead to think of all of them as available scholarly metrics – with varying validity depending on context and function.

2.2. In search of meaning: interpreting and classifying various metrics

Data aggregators and providers such as PLOS and ImpactStory were the first to categorize types of impact based on data sources. PLOS, for instance, groups data sources into viewed, saved, discussed, cited, and recommended, assuming increasing engagement from viewing to recommending (Lin and Fenner, 2013). ImpactStory uses the same categories, but distinguishes between scholars and the public as two distinct audiences (Piwowar, 2012). However, these platform-based distinctions are quite general and often reflect what the indicators are intended to measure rather than what is actually measured. For example, ImpactStory categorizes HTML views as views by the public, while PDF downloads are considered to be made by scholars. Similarly, the platform treats tweets as coming from the general public, although many tweets associated with scientific papers are likely to come from researchers (Tsou et al., in press). More recently, social media metrics have been discussed in light of citation theories (i.e. normative and social constructivist approaches, and concept symbols) and social theories (i.e. social capital, attention economics, and impression management) to contribute to the understanding of the meaning of the various metrics (Haustein et al., in press). This has led to a framework that categorizes various acts related to research objects (i.e. scholarly documents and agents) into the three categories of access, appraise, and apply, rather than classifying indicators based on the tools and data sources from which they come.

3. Current research

Since the coining of the term altmetrics in 2010, there has been a proliferation of scholarship on the subject of social media and scholarly communication. We provide here a brief overview of the current research related to the use and role of social media in scholarly communication, as well as the metrics derived from this use.

3.1. Social media uptake and motivation in academia

A number of social media tools and platforms have been developed to allow for the dissemination of and access to scholarship, for communication and interaction among scholars, and for the presentation of profiles at various levels of aggregation (e.g. individual, journal, institution). Persistent questions in social media metrics have been the extent to which these platforms are used, why, and by whom – critical questions for appropriate generalization and decision making on the basis of these platforms.

High degrees of use of social media and networking tools have been reported at the individual level, with percentages as high as 75 percent (Tenopir et al., 2013) and 80 percent (Procter et al., 2010), although uptake varies among fields and by demographic characteristics (e.g. gender, age). However, the surveys reporting these numbers were highly inclusive, operationalizing social media and networking tools to include Skype and Wikipedia. Looking more closely at particular platforms reveals substantial variation, with Google Scholar (Haustein et al., 2014c; Procter et al., 2010), collaborative authoring tools (Rowlands et al., 2011), and LinkedIn (Haustein et al., 2014c; Mas-Bleda et al., 2014) among the most popular. Lower rates have been found for other social media sites: the rate of Twitter use among academics is around 10 percent (Grande et al., 2014; Procter et al., 2010; Pscheida et al., 2013; Rowlands et al., 2011), trailed by Mendeley (6 percent), Slideshare (4 percent), and Academia.edu (2 percent) (Mas-Bleda et al., 2014). However, many of these studies have been disciplinarily homogeneous and have shown extreme variation depending on the population (e.g. the rates of tweeting among bibliometricians in Haustein et al., 2014c or the rates of blogging among academic health policy researchers in Grande et al., 2014).

Individual motivations to use social media for scholarly communication vary significantly by country (Mou, 2014; Nicholas et al., 2014), by age (Nicholas et al., 2014), and across and within platforms (Mohammadi et al., in press-a). While some have touted the advantages of social media for collaboration and claimed that their use is biased toward younger scholars, others have challenged these claims (Harley et al., 2010). The demographics of those who employ these technologies are also variable – e.g. Mendeley has been shown to be dominated by graduate students (Mohammadi et al., in press-b; Zahedi et al., 2013). Imbalances in terms of gender (Shema et al., 2012), level of education (Kovic et al., 2008), and disciplinary area (Shema et al., 2012) also call for cautious interpretation of analyses derived from a single platform.

Academic institutions have implemented social media tools to varying degrees. Motivations for institutional use of social media range from faculty development (Cahn et al., 2013) to pedagogy (Kalashyan et al., 2013). Academic libraries have been early adopters, with nearly all libraries maintaining institutional Twitter and Facebook accounts as well as hosting a blog (Boateng and Quan Liu, 2014). Journals have also increasingly adopted social media tools, using commenting (Stewart et al., 2013), blogging (Kortelainen and Katvala, 2012; Stewart et al., 2013), and social networking (Kortelainen and Katvala, 2012).

3.2. Analysis of social media metrics

The majority of published studies on the topic have focussed on social media activity associated with journal articles. Most of these examine the extent to which scientific articles are visible on various platforms (coverage), the average attention they receive (mean event rate), and the degree to which the metrics correlate with citations and other metrics. In terms of signal, Mendeley (the social bookmarking platform) has been shown to be the dominant source, with coverage as high as 50-70 percent in some disciplines (e.g. biomedical research and the social sciences) and nearly ubiquitous coverage for some journals (e.g. Nature, Science, JASIST, and PLOS journals) (Haustein et al., 2014b; Bar-Ilan, 2012; Li et al., 2012; Mohammadi et al., in press-b; Mohammadi and Thelwall, 2014; Priem et al., 2012). Other social reference managers such as CiteULike and BibSonomy capture less activity (Haustein and Siebenlist, 2011; Li et al., 2012); for example, 31 percent of PLOS articles were bookmarked on CiteULike compared to 80 percent on Mendeley (Priem et al., 2012). As shown with other metrics, there are also country-affiliation advantages (Sud and Thelwall, in press).
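
Coverage and mean event rate are simple descriptive statistics: coverage is the proportion of papers with at least one event on a platform, and the mean event rate is the average number of events per paper. The snippet below computes both from a list of per-paper event counts; the sample counts are invented purely for illustration.

```python
# Minimal sketch of the two descriptive indicators used in most coverage
# studies: coverage (share of papers with at least one event) and mean
# event rate (average number of events per paper). Counts are made up.
def coverage(counts):
    """Proportion of papers with a non-zero event count."""
    return sum(1 for c in counts if c > 0) / len(counts)

def mean_event_rate(counts):
    """Average number of events per paper (zeros included)."""
    return sum(counts) / len(counts)

if __name__ == "__main__":
    mendeley_readers = [0, 3, 12, 0, 7, 1, 0, 25, 4, 0]  # hypothetical per-paper counts
    print(f"coverage: {coverage(mendeley_readers):.0%}")                 # -> 60%
    print(f"mean event rate: {mean_event_rate(mendeley_readers):.1f}")   # -> 5.2
```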

Coverage and mean event rates for Twitter have been shown to be lower than those for Mendeley – between 10 and 21 percent, depending on the study and corpus (Costas et al., in press; Haustein et al., 2015; Priem et al., 2012). There are also significant differences across fields (Haustein et al., 2014b) and subfields (Haustein et al., 2014e). Facebook has even lower rates of coverage (between 2.5 and 10 percent, depending on the study), though access to these events is limited to publicly available profiles and thus has high potential for missing data (Costas et al., in press; Haustein et al., 2015; Priem et al., 2012). Similarly, access to comprehensive data on the mention of articles in blogs has proven difficult. Current studies estimate between 2 and 8 percent coverage for this medium (Costas et al., in press; Haustein et al., 2015; Priem et al., 2012), varying by discipline, journal, and open access policy (Fausto et al., 2012; Groth and Gurney, 2010). However, these are early years for social media metrics. Additional platforms are being developed, and existing platforms are now recognized for their potential to inform scholarly assessment: for example, Goodreads (Zuccala et al., 2014; Zuccala et al., 2015) and Wikipedia (Evans and Krauthammer, 2011; Nielsen, 2007; Priem et al., 2012). One fairly recent advance has been the metricization of peer review through systems such as F1000. While few studies have examined its coverage (with the exception of Priem et al., 2012), many authors have explored the various recommendation categories and levels (Waltman and Costas, 2014), disciplinary representation (Waltman and Costas, 2014), and correlations between these categories and other metrics (Bornmann, 2015; Waltman and Costas, 2014).

One constant has been the assumption that the validity and utility of new metrics can be tested through correlational analyses with traditional bibliometric indicators (Li et al., 2012). The majority of studies have reported weak correlations between citations and various social media metrics (Bornmann and Leydesdorff, 2013; Costas et al., in press; Eysenbach, 2011; Fausto et al., 2012; Haustein et al., 2014b, d, 2015; Mohammadi and Thelwall, 2013; Thelwall et al., 2013), though some have found moderately strong positive relationships (Haustein et al., 2014b; Nielsen, 2007). As with all metrics, strong variation is seen across the populations under analysis (e.g. Shema and Bar-Ilan, 2014), making it difficult to generalize the results. Correlational analyses have also examined the relations among social media metrics themselves – for example, between downloads and reference manager saves (Priem et al., 2012), tweets and downloads (Shuai et al., 2012), F1000 metrics and social media metrics (Bornmann, 2015; Li and Thelwall, 2012), blog posts and social media metrics (Allen et al., 2013), and F1000 metrics and expert assessment (Allen et al., 2009). Interpretation is always difficult: where correlations are positive and significant, one questions whether the new metric is duplicative and therefore unnecessary, whereas insignificant correlations may signal that something distinct has been measured (e.g. Sugimoto et al., 2008). A third option, significant negative correlations, as found in Haustein et al. (2014f), may be the strongest indication that the measures capture something distinct.
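
Most of the correlational studies cited above report rank-based (Spearman) coefficients, since both citation counts and social media counts are highly skewed. The following is a minimal sketch of such an analysis on invented per-paper counts; it is not a reconstruction of any specific study.

```python
# Minimal sketch of a rank-based correlation between citation counts and a
# social media metric, as commonly used in the studies cited above.
# The per-paper counts below are invented for illustration.
from scipy.stats import spearmanr

citations = [12, 0, 3, 45, 7, 1, 0, 22, 5, 9]
tweets    = [ 2, 0, 1,  8, 0, 3, 1,  5, 0, 2]

rho, p_value = spearmanr(citations, tweets)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```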

3.3. Data reliability and validity

There has been considerable concern over the reliability and validity of social media metrics (e.g. Dinsmore et al., 2014; Nature Materials Editors, 2012). In a comprehensive survey of more than 15 tools used to generate social media metrics, Wouters and Costas (2012, p. 5) concluded that altmetrics need a “far stricter protocol of data quality and indicator reliability and validity” before they can be appropriately applied to impact assessment. Many of the concerns regard data collection techniques and the variability among sources and times of collection (Gunn, 2014; Neylon, 2014; Torres-Salinas et al., 2013), which affects the replicability of the research. Methodological and statistical concerns are also paramount: there is a need to codify standard practices for the analysis of social media metrics (e.g. Sud and Thelwall, 2013).

Furthermore, the validity of these measures is called into question, given that the most tweeted papers, for example, often have funny titles, report curious topics (Haustein et al., 2014d), and refer to “the usual trilogy of sex, drugs, and rock and roll” (Neylon, 2014, para. 6). Social media metrics are often seen as positive indicators of public interest in science; however, these results are complicated by the lack of knowledge about the demographics of those utilizing the platforms and the presence of automated profiles (or bots) engaging in the system (Haustein et al., 2014a). Perhaps the most important criticism is the degree to which the focus on and proliferation of new metrics causes a displacement of attention from scholarship to social media performance (Gruber, 2014).

4. Contribution of this Special Issue

Out of the 22 submissions received, six papers were accepted, for an acceptance rate of 27 percent. The submitted contributions were reviewed by 37 external reviewers. The six accepted manuscripts are complementary to each other and fill some of the gaps currently found in the literature.

The first paper of the issue, authored by Rodrigo Costas, Zohreh Zahedi, and Paul Wouters from the Centre for Science and Technology Studies (CWTS) of Leiden University, uses science maps to compare the visibility of papers on various social media platforms with scores obtained using traditional bibliometric indicators. Drawing on more than half a million papers published in 2011, they visualize which subject areas (as presented in the map of science produced by CWTS) are popular on Twitter, Mendeley, Facebook, blogs, and mainstream news as captured by Altmetric.com. They highlight the similarity between citations and Mendeley readership in terms of the research areas in which the counts are most frequent, and show that, for most disciplines, readership counts exceed citation rates; this was especially true for the social sciences. The authors also show that papers in general medicine, psychology, and the social sciences – fields that are considered to have greater social impact – are much more visible on Twitter than papers in other fields, which suggests that tweets could, to a certain extent, reflect impact on the general public. Mentions on less prevalent platforms such as Google+, blogs, and mainstream media are biased toward papers published in multidisciplinary journals such as Nature, Science, or PNAS. Regarding Mendeley, Costas, Zahedi, and Wouters conclude that readership counts could serve as an alternative marker of scientific impact in the social sciences (much more than in the humanities and natural sciences), where the use of citations is more problematic.

Also using Altmetric.com data, Juan Pablo Alperin (Simon Fraser University) tackles another important issue in science indicators: the geographical bias of altmetrics. Using the metadata of papers indexed in the Latin American journal portal SciELO – which indexes more than 1,200 journals and half a million articles – he measures coverage (i.e. the proportion of articles with non-zero values) on Mendeley, Facebook, Twitter, and the other sources provided by Altmetric.com across disciplines and compares the results with those obtained in studies based on “international” databases. He shows that papers indexed in SciELO obtain lower coverage than papers indexed in other databases, with scores close to zero in most cases; this was also true for the major Brazilian collection, the largest in SciELO. Alperin suggests three potential explanations: SciELO articles receive less usage overall and, thus, less social media attention; social media use is lower in Latin America than in the previously studied contexts; or Latin America has distinct practices of sharing research on social media. In sum, Alperin's results convincingly demonstrate that, in addition to the discipline and topicality of papers, geography affects the visibility of papers on social media platforms.

The next paper, by Lutz Bornmann of the Max Planck Society, focuses on the relationships between a subset of altmetrics (namely tweets and Facebook scores) and various tags assigned by experts to papers on F1000, using a sample of 1,082 papers published in PLOS journals. Counts on Facebook and Twitter were significantly higher for papers tagged on F1000 as “good for teaching” than for papers without this tag. Bornmann also observes that Mendeley readership counts are positively associated with the tag “technical advance”, which is assigned to papers considered by experts to introduce a new practical or theoretical technique, and that the “new finding” tag is positively related to the number of Facebook posts. Using the tag “good for teaching” as a marker of a paper's potential impact beyond the specialized researchers of a discipline, Bornmann argues that Twitter and Facebook counts, but not those from Figshare or Mendeley, might be useful for measuring the social impact of research.

The paper by Alesia Zuccala, Frederik Verleysen, Roberto Cornacchia, and Tim Engels (University of Copenhagen, University of Antwerp, and Spinque B.V.) analyses the usefulness of an original data source for informetric research, Goodreads (a social cataloguing platform on which readers can rate and recommend books), for measuring the wider impact of academic books. Drawing on books cited by 604 history journals in the Scopus database, the authors retrieved more than 8,500 history books from the Goodreads platform, for which they compared citation and reader rating counts. For the entire data set as well as for the subset of books that received both a high number of citations and reader ratings, low correlations were found between citations and reader rating counts, which suggests that Goodreads ratings could be used as a complement to citations. Their results also show that reader ratings were more likely to be given to books held in academic and public libraries outside the USA, suggesting a positive effect of the books' international visibility. Of course, as with any new data source, more research is needed to assess whether these findings hold for other data sets covering different disciplines; however, their method provides a unique window on assessing the impact of research in a domain – history – that has for several decades remained one of the blind spots of bibliometrics and research evaluation.

Focussing on visualizing and interpreting social media activity, Victoria Uren and Aba-Sah Dadzie (Aston University and University of Birmingham) compare the Twitter activity of a trending – if not viral – topic (the Curiosity landing) with that of two non-trending topics (Phosphorus and Permafrost), to assess whether methods used for the first kind of topic can be transferred to the second. Results show that the parallel coordinates visualization method, in combination with pattern matching, is an effective way to observe dynamic changes in Twitter activity, as it allows for the analysis of both midsize and large collections of microposts and provides the scalability required for longitudinal studies. As a large proportion of the research on altmetrics has been performed by information scientists who have transposed methods and frameworks from the bibliometric paradigm to the analysis of altmetrics, this original methodological contribution provides researchers in the field with more advanced methods for studying the diffusion of scientific information on the microblogging platform. The approach differs from standard analyses of microblog data, which tend to focus on machine-driven analysis of large-scale data sets, and the paper provides evidence that it enables practical and effective analysis of the content of midsize to large collections of microposts.

The final paper in this issue also examines Twitter. Timothy D. Bowman (Université de Montréal and Indiana University Bloomington) presents an analysis of the tweeting behaviors of professors affiliated with universities of the Association of American Universities (AAU), based on both a survey of the faculty and an in-depth analysis of their tweets. A random sample of 75,000 of the professors' tweets was classified as personal or professional based on the impressions of users of Amazon Mechanical Turk, so-called “turkers”. Bowman's findings emphasize disciplinary differences in Twitter usage, with half of the computer scientists surveyed having an account, compared to about one-fifth of chemists. Younger researchers were also more likely to have an account than older researchers. In all departments surveyed, respondents indicated using Twitter in both personal and professional contexts, or for professional reasons only, except in philosophy, where most professors used it for strictly personal reasons. Differences were also found in the use of affordances such as #hashtags, @user mentions, URLs, and retweets across personal and professional tweets (as classified by turkers), which suggests that different social norms frame the various uses of the platform. Bowman also shows that tweeting activity (i.e. the number of tweets per day) varied greatly across disciplines, with scholars from the social sciences tweeting more (1.40 tweets per day) than scholars from the natural sciences (0.61 tweets per day). As one of the largest analyses of the prevalence of Twitter use in academia and of the various usages and factors that influence its use, this paper contributes to the development of a theoretical framework for the interpretation of Twitter-based indicators of science.

5. Conclusion

The contributions in this Special Issue provide insights into social media activity related to scholars and scholarly content, as well as into the metrics based on these online events. After decades of studying scholarly communication almost exclusively through papers and citations, scholars now have access to new sources of evidence, which, in turn, has brought new energy to the science indicators community.

Several parallels can be drawn between the current state of research on social media metrics and the early days of citation analysis. In a manner similar to the altmetrics research community, the bibliometric community has historically been driven by data availability rather than by crafting indicators based on specific concepts. In that sense, both communities (which overlap to a certain extent) have been quite pragmatic. However, while citations had been a central and established component of scholarly communication since the early days of modern science, the role and uses of various social media platforms within and outside academe are still taking shape (Haustein et al., in press). At the same time, funders, universities, and publishers increasingly demand indicators of the impact of science on society.

The comparison with bibliometrics can also provide us with lessons learned, as researchers are increasingly observing the adverse effects of the use of such indicators in research evaluation (Binswanger, 2015; Frey and Osterloh, 2006). Let us not condemn the burgeoning field of altmetrics to the same fate. As altmetrics hold the potential to make the evaluation of research activities more comprehensive, we need to focus our attention on understanding the meaning of these metrics. Hopefully, this Special Issue is a step in this direction.

Dr Stefanie Haustein, École de bibliothéconomie et des sciences de l'information (EBSI), Université de Montréal, Montréal, Canada

Dr Cassidy R. Sugimoto, School of Informatics and Computing, Indiana University, Bloomington, Indiana, USA, and

Dr Vincent Larivière, École de bibliothéconomie et des sciences de l'information (EBSI), Université de Montréal, Montréal, Canada and Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, Montréal, Canada

Acknowledgement

The authors would like to thank all authors for submitting their manuscripts and contributing to this Special Issue, as well as the 37 reviewers for their valuable feedback. The authors also thank Sam Work for her help with the literature review and acknowledge funding from the Alfred P. Sloan Foundation, Grant No. G-2014-3-25.

References

Allen, H.G., Stanton, T.R., Di Pietro, F. and Moseley, G.L. (2013), “Social media release increases dissemination of original articles in the clinical pain sciences”, PLoS ONE, Vol. 8 No. 7, p. e68914

Allen, L., Jones, C., Dolby, K., Lynn, D. and Walport, M. (2009), “Looking for landmarks: the role of expert review and bibliometric analysis in evaluating scientific publication outputs”, PLoS ONE, Vol. 4 No. 6, p. e5910

Almind, T.C. and Ingwersen, P. (1997), “Informetric analyses on the world wide web: methodological approaches to ‘webometrics'”, Journal of Documentation, Vol. 53 No. 4, pp. 404-426

Bar-Ilan, J. (2012), “JASIST 2001-2010”, Bulletin of the American Society for Information Science and Technology, Vol. 38 No. 6, pp. 24-28

Bawden, D. and Robinson, L. (2008), “The dark side of information: overload, anxiety and other paradoxes and pathologies”, Journal of Information Science, Vol. 35 No. 2, pp. 180-191

Binswanger, M. (2015), “How nonsense became excellence: forcing professors to publish”, in Welpe, I.M., Wollersheim, J., Ringelhan, S. and Osterloh, M. (Eds), Incentives and Performance, Springer International Publishing, Cham, pp. 19-32

Boateng, F. and Quan Liu, Y. (2014), “Web 2.0 applications' usage and trends in top US academic libraries”, Library Hi Tech, Vol. 32 No. 1, pp. 120-138

Bornmann, L. (2015), “Usefulness of altmetrics for measuring the broader impact of research: a case study using data from PLOS and F1000Prime”, Aslib Journal of Information Management, Vol. 67 No. 3, pp. 305-319

Bornmann, L. and Leydesdorff, L. (2013), “The validation of (advanced) bibliometric indicators through peer assessments: a comparative study using data from InCites and F1000”, Journal of Informetrics, Vol. 7 No. 2, pp. 286-291

Cahn, P.S., Benjamin, E.J. and Shanahan, C.W. (2013), “‘Uncrunching' time: medical schools' use of social media for faculty development”, Medical Education Online, Vol. 18, Article no. 20995

Costas, R., Zahedi, Z. and Wouters, P. (in press), “Do ‘altmetrics' correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective”, Journal of the Association for Information Science and Technology, doi: 10.1002/asi.23309.

Cronin, B. (2005), The Hand of Science: Academic Writing and its Rewards, Scarecrow Press, Lanham, MD

Cronin, B. and Sugimoto, C.R. (Eds) (2014), Scholarly Metrics Under the Microscope: from Citation Analysis to Academic Auditing, Association for Information Science and Technology and Information Today Inc., Medford, NJ

Cronin, B. and Weaver, S. (1995), “The praxis of acknowledgement: from bibliometrics to influmetrics”, Revista Española de Documentación Científica, Vol. 18 No. 2, pp. 172-177

Cronin, B., Snyder, H.W., Rosenbaum, H., Martinson, A. and Callahan, E. (1998), “Invoked on the web”, Journal of the American Society for Information Science, Vol. 49 No. 14, pp. 1319-1328

Dinsmore, A., Allen, L. and Dolby, K. (2014), “Alternative perspectives on impact: the potential of ALMs and altmetrics to inform funders about research impact”, PLoS Biology, Vol. 12 No. 11, p. e1002003

Evans, P. and Krauthammer, M. (2011), “Exploring the use of social media to measure journal article impact”, AMIA Annual Symposium Proceedings, pp. 374-381

Eysenbach, G. (2011), “Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact”, Journal of Medical Internet Research, Vol. 13 No. 4, p. e123

Fausto, S., Machado, F.A., Bento, L.F.J., Iamarino, A., Nahas, T.R. and Munger, D.S. (2012), “Research blogging: indexing and registering the change in science 2.0”, PLoS ONE, Vol. 7 No. 12, p. e50109

Fenner, M. (2013), “What can article-level metrics do for you?”, PLoS Biology, Vol. 11 No. 10, p. e1001687

Frey, B.S. and Osterloh, M. (2006), “Evaluations: hidden costs, questionable benefits, and superior alternatives”, Working Paper No. 302, University of Zurich, Zürich, available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=928354 (accessed March 24, 2015).

Garfield, E. (1955), “Citation indexes for science. A new dimension in documentation through association of ideas”, Science, Vol. 122 No. 3159, pp. 108-111

Garfield, E. (1964), “‘Science citation index' – a new dimension in indexing”, Science, Vol. 144 No. 3619, pp. 649-654

Garfield, E. (1983), Citation Indexing. Its Theory and Application in Science, Technology and Humanities, ISI Press, Philadelphia, PA

Grande, D., Gollust, S.E., Pany, M., Seymour, J., Goss, A., Kilaru, A. and Meisel, Z. (2014), “Translating research for health policy: researchers' perceptions and use of social media”, Health Affairs, Vol. 33 No. 7, pp. 1278-1285

Groth, P. and Gurney, T. (2010), “Studying scientific discourse on the web using bibliometrics: a chemistry blogging case study”, Proceedings of the WebSci10: Extending the Frontiers of Society On-Line, Raleigh, NC, available at: http://journal.webscience.org/308/2/websci10_submission_48.pdf (accessed March 25, 2015).

Gruber, T. (2014), “Academic sell-out: how an obsession with metrics and rankings is damaging academia”, Journal of Marketing for Higher Education, Vol. 24 No. 2, pp. 165-177

Gunn, W. (2014), “On numbers and freedom”, El Profesional de la Informacion, Vol. 23 No. 5, pp. 463-466

Harley, D., Acord, S.K., Earl-Novell, S., Lawrence, S. and King, C.J. (2010), Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines, Center for Studies in Higher Education, University of California Press, Berkeley, CA, available at: http://escholarship.org/uc/item/15x7385g (accessed March 25, 2015).

Haustein, S. (2014), “Readership metrics”, in Cronin, B. and Sugimoto, C.R. (Eds), Beyond Bibliometrics: Harnessing Multidimensional Indicators of Performance, MIT Press, Cambridge, MA, pp. 327-344

Haustein, S. and Larivière, V. (2015), “The use of bibliometrics for assessing research: possibilities, limitations and adverse effects”, in Welpe, I.M., Wollersheim, J., Ringelhahn, S. and Osterloh, M. (Eds), Incentives and Performance. Governance of Research Organizations, Springer, pp. 121-139

Haustein, S. and Siebenlist, T. (2011), “Applying social bookmarking data to evaluate journal usage”, Journal of Informetrics, Vol. 5 No. 3, pp. 446-457

Haustein, S., Bowman, T.D. and Costas, R. (in press), “Interpreting ‘altmetrics': viewing acts on social media through the lens of citation and social theories”, in Sugimoto, C.R. (Ed.), Theories of Informetrics: A Festschrift in Honor of Blaise Cronin, available at: http://arxiv.org/abs/1502.05701 (accessed March 18, 2015).

Haustein, S., Costas, R. and Larivière, V. (2015), “Characterizing social media metrics of scholarly papers: the effect of document properties and collaboration patterns”, PLoS ONE, Vol. 10 No. 3, p. e0120495

Haustein, S., Bowman, T.D., Holmberg, K., Peters, I. and Larivière, V. (2014f), “Astrophysicists on Twitter: an in-depth analysis of tweeting and scientific publication behaviour”, Aslib Proceedings, Vol. 66 No. 3, pp. 279-296

Haustein, S., Bowman, T.D., Macaluso, B., Sugimoto, C.R. and Larivière, V. (2014e), “Measuring Twitter activity of arXiv e-prints and published papers”, paper presented at Altmetrics14: Expanding Impacts and Metrics, ACM Web Science Conference, Bloomington, IN, doi: 10.6084/m9.figshare.1041514.

Haustein, S., Larivière, V., Thelwall, M., Amyot, D. and Peters, I. (2014b), “Tweets vs. Mendeley readers: how do these two social media metrics differ?”, IT – Information Technology, Vol. 56 No. 5, pp. 207-215

Haustein, S., Peters, I., Sugimoto, C.R., Thelwall, M. and Larivière, V. (2014d), “Tweeting biomedicine: an analysis of tweets and citations in the biomedical literature”, Journal of the Association for Information Science and Technology, Vol. 65 No. 4, pp. 656-669

Haustein, S., Bowman, T.D., Holmberg, K., Tsou, A., Sugimoto, C.R. and Larivière, V. (2014a), “Tweets as impact indicators: examining the implications of automated bot accounts on Twitter”, Journal of the Association for Information Science and Technology, available at: http://arxiv.org/abs/1410.4139 (accessed March 25, 2015).

Haustein, S., Peters, I., Bar-Ilan, J., Priem, J., Shema, H. and Terliesner, J. (2014c), “Coverage and adoption of altmetrics sources in the bibliometric community”, Scientometrics, Vol. 101 No. 2, pp. 1145-1163

Kalashyan, I., Kaneva, D., Lee, S., Knapp, D., Roushan, G. and Bobeva, M. (2013), “Paradigm shift – engaging academics in social media – the case of Bournemouth University”, Proceedings of the European Conference on e-Learning, 2013, presented at the 12th European Conference on e-Learning, Sophia Antipolis, pp. 662-665

Kortelainen, T. and Katvala, M. (2012), “‘Everything is plentiful – except attention'. Attention data of scientific journals on social web tools”, Journal of Informetrics, Vol. 6 No. 4, pp. 661-668

Kovic, I., Lulic, I. and Brumini, G. (2008), “Examining the medical blogosphere: an online survey of medical bloggers”, Journal of Medical Internet Research, Vol. 10 No. 3, p. e28

Li, X. and Thelwall, M. (2012), “F1000, Mendeley and traditional bibliometric indicators”, in Proceedings of the 17th International Conference on Science and Technology Indicators, Repro-UQAM, Montreal, Canada, pp. 541-551, available at: http://2012.sticonference.org/Proceedings/vol2/Li_F1000_541.pdf (accessed March 8, 2015)

Li, X., Thelwall, M. and Giustini, D. (2012), “Validating online reference managers for scholarly impact measurement”, Scientometrics, Vol. 91 No. 2, pp. 461-471

Lin, J. and Fenner, M. (2013), “Altmetrics in evolution: defining and redefining the ontology of article-level metrics”, Information Standards Quarterly, Vol. 25 No. 2, pp. 20-26

Liu, J. (2014), “New source alert: Policy documents”, Altmetric blog, available at: www.altmetric.com/blog/new-source-alert-policy-documents/ (accessed March 19, 2015).

Mas-Bleda, A., Thelwall, M., Kousha, K. and Aguillo, I.F. (2014), “Do highly cited researchers successfully use the social web?”, Scientometrics, Vol. 101 No. 1, pp. 337-356

Merton, R.K. (1957), Social Theory and Social Structure, Free Press, New York, NY

Merton, R.K. and Garfield, E. (1986), “Foreword”, in Price, D.J.d.S. (Ed.), Little Science, Big Science […] and Beyond, Columbia University Press, New York, NY, pp. vii-xiv

Moed, H.F. (2006), Citation Analysis in Research Evaluation, Springer, Dordrecht

Mohammadi, E. and Thelwall, M. (2013), “Assessing non-standard article impact using F1000 labels”, Scientometrics, Vol. 97 No. 2, pp. 383-395

Mohammadi, E. and Thelwall, M. (2014), “Mendeley readership altmetrics for the social sciences and humanities: research evaluation and knowledge flows”, Journal of the Association for Information Science and Technology, Vol. 65 No. 8, pp. 1627-1638

Mohammadi, E., Thelwall, M. and Kousha, K. (in press-a), “Can Mendeley bookmarks reflect readership? A survey of user motivations”, Journal of the Association for Information Science and Technology, available at: www.scit.wlv.ac.uk/~cm1993/papers/CanMendeleyBookmarksReflectReadershipSurvey_preprint.pdf (accessed January 12, 2015).

Mohammadi, E., Thelwall, M., Haustein, S. and Larivière, V. (in press-b), “Who reads research articles? An altmetrics analysis of Mendeley user categories”, Journal of the Association for Information Science and Technology, available at: www.ost.uqam.ca/Portals/0/docs/articles/2014/JASIST_mohammadietal.pdf (accessed March 18, 2015).

Mou, Y. (2014), “Presenting professorship on social media: from content and strategy to evaluation”, Chinese Journal of Communication, Vol. 7 No. 4, pp. 389-408

Nature Materials Editors (2012), “Alternative metrics”, Nature Materials, Vol. 11 No. 11, p. 907

Neylon, C. (2014), “Altmetrics: what are they good for?”, PLOS Open, available at: http://blogs.plos.org/opens/2014/10/03/altmetrics-what-are-they-good-for/ (accessed January 10, 2015).

Nicholas, D., Watkinson, A., Volentine, R., Allard, S., Levine, K., Tenopir, C. and Herman, E. (2014), “Trust and authority in scholarly communications in the light of the digital transition: setting the scene for a major study”, Learned Publishing, Vol. 27 No. 2, pp. 121-134

Nielsen, F.A. (2007), “Scientific citations in Wikipedia”, First Monday, Vol. 12 No. 8, doi: 10.5210/fm.v12i8.1997.

Parkhill, M. (2013), “Plum Analytics and OCLC partner to utilize WorldCat metrics for library holdings”, Plum Analytics Blog, available at: http://blog.plumanalytics.com/post/61690801972/plum-analytics-and-oclc-partner-to-utilize (accessed March 18, 2015).

Piwowar, H. (2012), “A new framework for altmetrics”, Impactstory Blog, available at: http://blog.impactstory.org/31524247207/ (accessed March 18, 2015).

Piwowar, H. (2013), “Value all research products”, Nature, Vol. 493 No. 7431, p. 159

Price, D.J.D.S. (1963), Little Science, Big Science, Columbia University Press, New York, NY

Priem, J. (2010), “I like the term #articlelevelmetrics, but it fails to imply *diversity* of measures. Lately, I'm liking #altmetrics”, Tweet, available at: https://twitter.com/jasonpriem/status/25844968813 (accessed March 23, 2015).

Priem, J. (2014), “Altmetrics”, in Cronin, B. and Sugimoto, C.R. (Eds), Beyond Bibliometrics: Harnessing Multidimensional Indicators of Performance, MIT Press, Cambridge, MA, pp. 263-287

Priem, J., Piwowar, H.A. and Hemminger, B.M. (2012), “Altmetrics in the wild: using social media to explore scholarly impact”, arXiv preprint, available at: http://arxiv.org/abs/1203.4745 (accessed March 25, 2015).

Priem, J., Taraborelli, D., Groth, P. and Neylon, C. (2010), “Altmetrics: a manifesto”, Altmetrics.org, available at: http://altmetrics.org/manifesto/ (accessed March 25, 2015).

Procter, R., Williams, R., Stewart, J., Poschen, M., Snee, H., Voss, A. and Asgari-Targhi, M. (2010), “Adoption and use of Web 2.0 in scholarly communications”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 368 No. 1926, pp. 4039-4056

Pscheida, D., Albrecht, S., Herbst, S., Minet, C. and Köhler, T. (2013), “Nutzung von Social Media und onlinebasierten Anwendungen in der Wissenschaft” [Use of social media and online-based applications in academia], ZBW – Deutsche Zentralbibliothek für Wirtschaftswissenschaften – Leibniz-Informationszentrum Wirtschaft, available at: www.qucosa.de/fileadmin/data/qucosa/documents/13296/Science20_Datenreport_2013_PDF_A.pdf (accessed March 18, 2015).

Rousseau, R. and Ye, F.Y. (2013), “A multi-metric approach for research evaluation”, Chinese Science Bulletin, Vol. 58 No. 26, pp. 3288-3290

Rowlands, I., Nicholas, D., Russell, B., Canty, N. and Watkinson, A. (2011), “Social media use in the research workflow”, Learned Publishing, Vol. 24 No. 3, pp. 183-195

Shema, H. and Bar-Ilan, J. (2014), “Do blog citations correlate with a higher number of future citations? Research blogs as a potential source for alternative metrics”, Journal of the Association for Information Science and Technology, Vol. 65 No. 5, pp. 1018-1027

Shema, H., Bar-Ilan, J. and Thelwall, M. (2012), “Research blogs and the discussion of scholarly information”, PLoS ONE, Vol. 7 No. 5, p. e35869

Shuai, X., Pepe, A. and Bollen. J. (2012), “How the scientific community reacts to newly submitted preprints: article downloads, twitter mentions, and citations”, PLoS ONE, Vol. 7 No. 11, p. e47523

Stewart, J., Procter, R., Williams, R. and Poschen, M. (2013), “The role of academic publishers in shaping the development of Web 2.0 services for scholarly communication”, New Media & Society, Vol. 15 No. 3, pp. 413-432

Sud, P. and Thelwall, M. (2013), “Evaluating altmetrics”, Scientometrics, Vol. 98 No. 2, pp. 1131-1143

Sud, P. and Thelwall, M. (in press), “Not all international collaboration is beneficial: the Mendeley readership and citation impact of biochemical research team size”, Journal of the Association for Information Science and Technology, available at: www.scit.wlv.ac.uk/~cm1993/papers/InternationalCollaborationBioChemistryPreprint.pdf (accessed March 25, 2015).

Sugimoto, C.R., Russell, T.G., Meho, L.I. and Marchionini, G. (2008), “MPACT and citation impact: two sides of the same scholarly coin?”, Library & Information Science Research, Vol. 30 No. 4, pp. 273-281

Tenopir, C., Volentine, R. and King, D.W. (2013), “Social media and scholarly reading”, Online Information Review, Vol. 37 No. 2, pp. 193-216

Thelwall, M., Vaughan, L. and Björneborn, L. (2005), “Webometrics”, Annual Review of Information Science and Technology, Vol. 39 No. 1, pp. 81-135

Thelwall, M., Haustein, S., Larivière, V. and Sugimoto, C.R. (2013), “Do altmetrics work? Twitter and ten other social web services”, PLoS ONE, Vol. 8 No. 5, p. e64841

Torres-Salinas, D., Cabezas-Clavijo, Á. and Jiménez-Contreras, E. (2013), “Altmetrics: new indicators for scientific communication in Web 2.0”, Comunicar, Vol. 21 No. 41, pp. 53-60

Tsou, A., Bowman, T.D., Ghazinejad, A. and Sugimoto, C.R. (in press), “Who tweets about science?”, Proceedings of the 2015 International Society for Scientometrics and Informetrics, Istanbul

Waltman, L. and Costas, R. (2014), “F1000 recommendations as a potential new data source for research evaluation: a comparison with citations”, Journal of the Association for Information Science and Technology, Vol. 65 No. 3, pp. 433-445

Weingart, P. (2005), “Impact of bibliometrics upon the science system: inadvertent consequences?”, Scientometrics, Vol. 62 No. 1, pp. 117-131

Wouters, P. and Costas, R. (2012), “Users, narcissism and control – tracking the impact of scholarly publications in the 21st century”, SURF Foundation, Rochester, NY, available at: www.surf.nl/binaries/content/assets/surf/en/knowledgebase/2011/Users+narcissism+and+control.pdf (accessed March 25, 2015).

Zahedi, Z., Costas, R. and Wouters, P. (2013), “What is the impact of the publications read by the different Mendeley users? Could they help to identify alternative types of impact?”, presented at the PLoS ALM Workshop, San Francisco, CA, available at: http://article-level-metrics.plos.org/alm-workshop-2013/ (accessed March 24, 2015).

Zuccala, A., Verleysen, F., Cornacchia, R. and Engels, T. (2014), “The societal impact of history books: citations, reader ratings, and the ‘altmetric' value of Goodreads”, 19th Nordic Workshop on Bibliometrics and Research Policy, Reykjavik, September 25-26, available at: www.rannis.is/media/erindi-glaerur/5-C-Zuccala-NordicWorkshop.pdf (accessed March 25, 2015).

Zuccala, A., Verleysen, F., Cornacchia, R. and Engels, T. (2015), “Altmetrics for the humanities: Comparing Goodreads reader ratings with citations to history books”, Aslib Journal of Information Management, Vol. 67 No. 3, pp. 320-336
