Open-access mega-journals: The future of scholarly communication or academic dumping ground? A review

Valerie Spezi (Loughborough University, Loughborough, UK)
Simon Wakeling (University of Sheffield, Sheffield, UK)
Stephen Pinfield (University of Sheffield, Sheffield, UK)
Claire Creaser (Loughborough University, Loughborough, UK)
Jenny Fry (Loughborough University, Loughborough, UK)
Peter Willett (University of Sheffield, Sheffield, UK)

Journal of Documentation

ISSN: 0022-0418

Article publication date: 13 March 2017

Abstract

Purpose

Open-access mega-journals (OAMJs) represent an increasingly important part of the scholarly communication landscape. OAMJs, such as PLOS ONE, are large scale, broad scope journals that operate an open access business model (normally based on article-processing charges) and employ a novel form of peer review, focussing on scientific "soundness" and eschewing judgement of novelty or importance. The purpose of this paper is to examine the discourses relating to OAMJs and their place within scholarly publishing, and to consider attitudes towards mega-journals within the academic community.

Design/methodology/approach

This paper presents a review of the literature of OAMJs structured around four defining characteristics: scale, disciplinary scope, peer review policy, and economic model. The existing scholarly literature was augmented by searches of more informal outputs, such as blogs and e-mail discussion lists, to capture the debate in its entirety.

Findings

While the academic literature relating specifically to OAMJs is relatively sparse, discussion in other fora is detailed and animated, with debates ranging from the sustainability and ethics of the mega-journal model, to the impact of soundness-only peer review on article quality and discoverability, and the potential for OAMJs to represent a paradigm-shifting development in scholarly publishing.

Originality/value

This paper represents the first comprehensive review of the mega-journal phenomenon, drawing not only on the published academic literature, but also grey, professional and informal sources. The paper advances a number of ways in which the role of OAMJs in the scholarly communication environment can be conceptualised.

Citation

Spezi, V., Wakeling, S., Pinfield, S., Creaser, C., Fry, J. and Willett, P. (2017), "Open-access mega-journals: The future of scholarly communication or academic dumping ground? A review", Journal of Documentation, Vol. 73 No. 2, pp. 263-283. https://doi.org/10.1108/JD-06-2016-0082

Publisher

Emerald Publishing Limited

Copyright © 2017, Valerie Spezi, Simon Wakeling, Stephen Pinfield, Claire Creaser, Jenny Fry, Peter Willett

License

This paper is published under the Creative Commons Attribution (CC BY 3.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at: http://creativecommons.org/licences/by/3.0/legalcode


1. Introduction

Open-access mega-journals (OAMJs) represent an increasingly important part of the scholarly communication environment. The first journal of this type (still seen as an exemplar), PLOS ONE, was launched in 2006, and was the largest peer-reviewed journal globally in 2015, publishing just over 27,400 articles[1] across a wide range of science, technology and medicine (STM) disciplines. Since it gained a prominent place in the scholarly communication landscape, many established publishers have launched "PLOS ONE-like" titles, including Nature (Scientific Reports), which has an equally wide subject scope, the American Institute of Physics (AIP Advances), covering all of physics, and BMJ (BMJ Open), covering all of medicine. They have since been joined by new entrants into the publishing marketplace, such as F1000 and PeerJ, both covering all life science disciplines.

In the last five years in particular, OAMJs have given rise to debate and controversy. Some commentators have provided a positive view of mega-journals and their place in the scholarly communication system. Esposito, in an intervention in the social media debate, proposed that “PLOS ONE points to the future of academic publishing” (comment on Anderson, 2010). Wellen (2013), in an early study of the political economy of mega-journals, identified them as having at least some of the characteristics of a disruptive innovation, with the potential to contribute to major changes in the scholarly communication environment. Guédon (2015), also contributing to an online discussion on the future of scholarly communication, stated “subsidized mega-journals would be the best system”. In contrast, mega-journals have also generated scepticism. Some have seen them as a “dumping ground” for lower quality outputs which reduce the valuable filtering of content provided by conventional journals (Hawley, quoted by Butler, 2008; Davis, 2014). As a result, fears have been expressed that publishing in mega-journals is “career suicide”, particularly for early career researchers (Tredennick, 2013). Others have suggested that mega-journals are a “cash cow” for open access publishers sustained through “bulk publishing” (Butler, 2008).

This review paper brings together these and other views of mega-journals in order to understand better the phenomenon and its implications for scholarly communication. This is the first comprehensive review of the mega-journal phenomenon. It reports major studies of mega-journals and discusses key issues to which they give rise. The paper begins with a discussion on the definition of “mega-journals”, followed by an analysis of various perspectives on the defining characteristics of OAMJs. It goes on to discuss attitudes to OAMJs in the academic community. The key issues are then brought together in a Discussion section which comments on the place of mega-journals in the current scholarly communication landscape, and their potential role in the future.

2. Method

A number of sources were used to identify material relevant to this review. To locate peer-reviewed articles, searches of the Scopus and Web of Science databases, as well as Google Scholar, were conducted in May 2016 using variations of the term "mega-journal". In order to capture articles focussing on individual mega-journals, additional searches were conducted with the titles of prominent OAMJs as the search terms. All results were reviewed, and a total of seven peer-reviewed articles were found to focus on mega-journals. These are presented in Table I, along with a brief summary of the aspect of OAMJs discussed in each article. Other non-peer-reviewed articles, such as editorials, in journals covered by Scopus and Web of Science were also consulted (e.g. Buriak, 2015; Busby, 2015; MacCallum, 2011). Furthermore, relevant additional items from the peer-reviewed and professional literature were consulted based on references from the core articles and other bibliographic searches relating to themes identified in the research.

It was also recognised that many issues relating to mega-journals have been debated in less formal scholarly communication fora (including blogs, news items and mailing lists) with an immediacy and openness not seen in formal publications. Along with searches of the Scholarly Kitchen blog, and the blogs and other social media of well-known open access (OA) commentators, the Open Access Tracking Project (OATP) website was used to identify relevant material. In particular, OATP items tagged with the "OA.megajournal" tag were taken into account. In combination with general web searches, OATP also proved useful for identifying professional and grey literature relating to OAMJs.

Analysis of these various sources was undertaken to identify the major issues discussed. The authors carried out a thorough reading of all of the sources. From this, a detailed commentary on the sources was produced and a thematic map was developed. These were then discussed and refined by the authorial team. The major themes identified corresponded closely to the key defining characteristics of OAMJs (discussed below), plus an additional thematic area of attitudes to and debates around mega-journals within the academic community. These themes were used as the basis for structuring this review, which reports the analysis undertaken and highlights the major issues to which the sources give rise and the relationships between them. Various potential conceptualisations of the relationship between the key issues of journal business models and quality control were also developed and are outlined in the Discussion section.

3. Defining “mega-journal”

It seems that the term “mega-journal” was first used specifically in relation to journal size. The taxonomy journal, Zootaxa, defined itself as a “mega-journal” as early as 2006, a label which it saw as applying to any journal with an output “a magnitude larger than an average journal in a particular field” (Zhang, 2006, p. 68). While some recent references to mega-journals also adopt a primarily size-based definition (Busby, 2015; Xia, 2014), the term has been used more widely, particularly since its association with PLOS ONE and subsequently with “PLOS ONE-like” journals (Binfield, 2013). PLOS ONE certainly meets this criterion of scale, with an output level which rose rapidly following its launch in 2006, reaching 1,239 articles in 2007, 13,703 in 2011 and 31,404 in 2013 (figures based on articles indexed in the Scopus database). Despite a decline in the number of published papers in 2014 (30,309) and 2015 (27,488), it was still the largest academic journal globally in mid-2016.

One key reason for such large numbers is PLOS ONE's broad subject scope: it accepts articles from all STM disciplines and, indeed, some social sciences. Articles are not, therefore, assessed on their potential "interest" to a narrow subject community. Nor are they assessed by peer reviewers based on criteria of "novelty" or "importance", judgements which are traditionally central to the acceptance criteria of the overwhelming majority of established peer-reviewed titles. Rather, PLOS ONE assesses articles based on the primary criterion of "scientific soundness", with all articles which meet this quality threshold (and fall within the very broad subject coverage) accepted for publication. The PLOS ONE policy states:

PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership, who are the most qualified to determine what is of interest to them

(PLOS ONE, n.d.).

Issues of “interest” or “importance” of papers are informed to a certain extent by the use of post-publication article-level metrics, particularly so-called “altmetrics” (Sud and Thelwall, 2014; Zahedi et al., 2014), which are given some prominence by the journal.

The term “mega-journal”, partly because of its association with PLOS ONE and other similar journals, has thus become linked not merely with large scale publishing output, but also breadth of scope and peer-review criteria based specifically on “soundness”, as well as the basic criterion of OA, normally based on a business model of pre-publication article-processing charges (APCs). These elements can be found in definitions in the literature (Binfield, 2013; Björk, 2015; Norman, 2012; Ware and Mabe, 2015), of which Björk’s (2015) criteria are the most formalised (Table II). He identifies a set of “primary criteria” (associated with large scale, “soundness”-based peer review, broad scope and APC-based OA), and a set of “secondary criteria” (including use of altmetrics and rapid publication). In Björk’s analysis, to qualify as a mega-journal, a title should comply with all of the primary and most of the secondary criteria.

Estimates of the number of mega-journals currently operating range from 14 (Björk, 2015) to 35 (Ware and Mabe, 2015). Binfield (2013) lists 28, while Solomon refers to "over 20" (2014, p. 2). Björk's lower estimate is certainly due to his more restrictive criteria. Table AI lists all titles that are identified as OAMJs in at least one of six sources (Binfield, 2013; Björk, 2015; megajournals.info, 2016; Norman, 2012; Ware and Mabe, 2015; Wikipedia, 2016). Only four titles (Biology Open, BMJ Open, PLOS ONE and Scientific Reports) are common to all six lists, although some titles (notably SpringerPlus, IEEE Access and PeerJ) post-date Norman's analyses, and are listed in all other sources. These differences may be due to the subjective elements in all of the definitions of "mega-journal". In Björk's (2015) criteria, for example, it is unclear precisely what constitutes a "moderate APC" or a "high-prestige publisher". "Big publishing volume or aiming for it" is itself vague. There also appears to be some ambiguity regarding journals' peer-review policy. G3, for example, specifies that it publishes only articles "whose availability would be valuable for genetics and genomics investigators" (Genetics Society of America, n.d.). While Björk clearly considers this criterion to be within the bounds of "scientific soundness", other authors apparently do not. Despite this, Björk's criteria remain a very useful working definition.

There does, however, continue to be debate about the term “mega-journal”. One recent blog, commenting on Björk’s analysis, stated:

I am using the term “megajournal” here to mean “journal that practices PLOS ONE-style peer-review for correctness only, ignoring guesses at possible impact”. It’s not a great term for this class of journals, but it seems to be becoming established as the default

(SV-Pow, 2015).

The fact that the term is now widely used (described as the “default”), and that the concept has become widely accepted (a recognised “class of journals”, even allowing for fluid definitional boundaries), indicates that mega-journals merit further study. In the discussion which follows, the key mega-journal characteristics (large volume, broad scope, soundness-based peer review and APC-based OA), Björk’s primary criteria, are used as a framework for the analysis, although the discussion takes in a larger group of titles than included within Björk’s definition. This is augmented by a discussion of responses to and debates around mega-journals in the academic community.

4. Publication volume

In 2015, the combined outputs of 11 prominent mega-journals (those that meet a strict interpretation of Björk's criteria and are indexed in Scopus) totalled 44,820 articles (Figure 1). PLOS ONE clearly dominates this figure. In 2012, Binfield (2012) estimated that PLOS ONE, with its 23,500 articles, accounted for nearly 2.4 per cent of the PubMed output in that year. Other sources cited in Fein (2013) claim that 1 in 60 articles in PubMed for the year 2011 was from PLOS ONE. However, 2013 was a high point in PLOS ONE output (31,404 articles), with 2014 (30,309) and 2015 (27,488) both seeing a decline in the number of articles published. Nature's Scientific Reports grew over the same period, from 2,494 articles in 2013 to 10,600 in 2015. Data from the first half of 2016 indicate that these trends are continuing, making it a realistic possibility that Scientific Reports will overtake PLOS ONE's monthly output by the end of the year. The success of Scientific Reports is likely to have been a contributory factor in the decision to close SpringerPlus following the Springer-Nature merger (Epstein, 2016).

Other titles also grew during the period, but with much smaller outputs, of between 500 and 1,000 articles a year. Bearing in mind that publication volumes are strongly discipline-dependent, these outputs place them at the same level as some large conventional STM journals. While these figures put the OAMJs in the largest 1 per cent of all journals, the lack of a clearly agreed definition means it remains open to debate whether they are large enough to justify the "mega" prefix. Ware and Mabe (2015) identified mega-journals as the fastest growing segment of the Gold OA market, a market which now accounts for 11-12 per cent of the approximately two million STM articles published each year. Articles in mega-journals themselves constituted about 2 per cent of the overall STM output.

Scale (or anticipation of it) has informed discussion of the design of a number of mega-journals. Mega-journals are (or have the potential to be) "giant content generators" (MacCallum, 2011) or midway between journals and archives (Wellen, 2013). This has implications for the organisation and discovery of information content, beyond the subject-category navigation generally offered. MacCallum (2011) argues that mega-journals need to rethink the organisation and structure of their content carefully, so that they can cater for the different needs of their diverse readership, including communities of scientists, policy makers and journalists, as well as a more general audience. Catering for those extremely varied information needs may involve, according to MacCallum (2011), the creation of sophisticated client interfaces that can be personalised, although sustainable services of this sort are, for the most part, yet to emerge.

Managing a mega-journal of the scale of PLOS ONE also involves new challenges, including the coordination and support of large numbers of academic editors, editorial board members and peer reviewers, all coming from a wide range of disciplines with differing disciplinary cultures and practices (MacCallum, 2011; Wiser, 2015). Anderson (2014) has commented on the potential problems of matching articles with appropriate academic editors and reviewers. Many mega-journals have reportedly designed platforms and processes to support large scale activity, including reliance on automated workflows. For example, Heliyon (published by Elsevier) uses automated systems wherever possible, along with Scopus data, to assign articles directly to editors and reviewers (Grimme, 2015). Hindawi, with its International Scholarly Research Notices series, is also said to use a fully automated assignment system for reviewers (Björk and Hedlund, 2015). Scale has, however, created problems. As it has grown, PLOS ONE has found it difficult to maintain its earlier publication speed, which has slowed considerably. Powell (2016) reports review time rising from 37 to 125 days over the journal's lifespan, a level comparable with many conventional journals. Since PLOS ONE originally used the strapline "accelerating science", and rapid publication is regarded as a key incentive for publishing in an OAMJ, the increasing length of the publication process is a potential problem for mega-journals.

5. Disciplinary scope

Large publication volumes are associated with broad disciplinary scope, the second of the Björk criteria. Broad scope is a major departure from scholarly publishing trends of the last 50 years, which are characterised by ever increasing specialisation (Wellen, 2013). Many mega-journals cover an entire discipline or a wide range of disciplines, as already described. Heliyon, launched in 2015, covers all academic disciplines (Grimme, 2015).

Notwithstanding the broad scope of mega-journals, biomedical disciplines have dominated them in terms of the number of articles published (Björk, 2015). In her 2011 review of PLOS ONE, PLOS's MacCallum (2011) stated that PLOS ONE had "a current focus on the life and medical sciences" (p. 1). Since PLOS ONE has always accepted submissions across the whole of STM, it is not clear whether this is a policy-driven focus or simply a de facto one. The apparent wider acceptance of Gold OA publishing by the biomedical disciplines may be relevant here (Gargouri et al., 2012), as well as what might be seen as greater coherence across a range of cognate biomedical disciplines. This may make researchers in these subject areas more inclined to report their findings in a journal with very broad coverage, especially in comparison to those working in the physical sciences. There is evidence, however, that other disciplines, particularly the physical sciences, are now adopting mega-journals (Wakeling et al., 2016). PLOS ONE articles are predominantly in the biomedical disciplines, whereas Scientific Reports publishes a much higher proportion of articles in the physical sciences.

Although mega-journals are predominantly in the STM fields, there are now a small number of OAMJ initiatives in the humanities and social sciences (HSS), notably SAGE Open, launched in 2011, and the Open Library of the Humanities, launched in 2014. The latter announced itself specifically as "a project exploring a PLOS-style model for the humanities and social sciences" (OLH, 2013), although it has evolved to become a platform for a series of specialist journals (ten in mid-2016) as well as a mega-journal title. Both OLH and SAGE Open have retained an HSS focus, whereas some mega-journals have attempted to cover all disciplines (HSS and STM). Such an approach is controversial, however, particularly following Springer-Nature's decision to close SpringerPlus, citing such wide breadth as unsuccessful, with HSS scholars (and also those in "technology, engineering and mathematics") being less willing to contribute to a journal of such broad scope (Epstein, 2016).

One possible alternative response to this issue is the development of series titles, an approach that has been regarded as closely allied to the mega-journal phenomenon, since closely coupled series can be observed to have "mega-journal-like" characteristics (Norman, 2012). Series, such as the BMC Series (60 subject-specific OA journals in the areas of biology and medicine, BioMed Central, 2016) or the Frontiers in […] Series (54 OA journals covering 426 specialities in the physical and natural sciences and medicine, Frontiers, n.d.), might each, taken as a whole, be viewed as a single journal of broad disciplinary scope. This is particularly the case when series titles seem to be marketed and managed as a coherent set rather than as separate titles.

One important feature that these varied approaches to journal publication have in common is the apparent shift from the journal to the article as the key unit. Within a large repository of content, the article, rather than the venue that houses it, arguably becomes more prominent. The emphasis placed on article-level metrics by mega-journal titles may be seen as related to this. It might also be seen as part of a wider trend in scholarly communication of the diminishing importance of the journal even in conventional publishing.

Large scale and broad scope do, however, create major challenges. One prominent feature of the informal literature is the issue of the discoverability of papers in mega-journals. For example, Anderson (2014) has questioned whether current technology and working practices make it feasible for users to find relevant material in very large collections like PLOS ONE. He suggested that the traditional journal model allows users to browse small, highly focussed collections of articles, something not possible in mega-journals. In contrast, others have suggested that both technologies and working practices are changing – rather than reading a particular journal issue from cover to cover, most researchers now use article alerting services or search for content via search engines and aggregator sites (Housewright et al., 2013). These developments, as well as those described by MacCallum (2011) for individual mega-journals, are likely at least to mitigate problems of discoverability as technology develops and working practices evolve.

6. Peer review

Perhaps the most controversial feature of mega-journals is their approach to quality control, particularly peer review based on “soundness” (Björk and Catani, 2016). Since peer review is often seen as foundational to scholarly communication, any changes to it are often regarded with suspicion (Pinfield, 2015). Soundness-based peer review has certainly been criticised as representing a decline in quality standards, labelled, for example, as “light peer review” (Butler, 2008; Davis, 2008; Buriak, 2015).

Advocates of OAMJs suggest that the conventional peer review system is actually at least partly derived from print-based page constraints – the need to select a finite number of articles to fill a fixed number of printed pages (Burns, 2016). Online journals do not have the same constraints and therefore, it is argued, do not need to follow the same processes. In particular, however, it is argued that peer review focussed on "soundness" avoids the subjectivity associated with peer reviewers judging the novelty of a piece of work, its potential importance to a field, or its interest to a given subject community. All of these, it is suggested, involve highly subjective judgements compared with the more "objective" assessment of scientific soundness, hence the label of "objective peer review" sometimes given to this approach by its supporters (see comments on Anderson, 2010). Accompanying this argument regarding pre-publication, soundness-based quality assessment is the view that novelty, importance and interest can, in fact, be better assessed following publication by measuring the reception and use of a paper (Björk and Hedlund, 2015). This has been part of the motivation for mega-journals giving article-level metrics (including downloads, citations, bookmarks and social media comment) greater prominence than has often been the case in conventional journals.

It seems natural to assume that a consequence of reviewing on the basis of "soundness" alone would be that OAMJs report higher acceptance rates than traditional journals, since articles are not being rejected on the basis of their perceived importance. PLOS ONE reports an acceptance rate in the region of 69 per cent, with 60 per cent for BMJ Open, 55 per cent for Scientific Reports, 68 per cent for FEBS Open Bio and 51 per cent for Biology Open (Björk, 2015). Putting these figures in context is problematic, since a range of factors have been shown to influence acceptance rates, most notably discipline, but also country of origin of the editor, and number of reviewers (Sugimoto et al., 2013). Sugimoto et al.'s study of non-OA journals in five different subject areas found acceptance rates to be 30-40 per cent for most disciplines, the exception being health journals, which they found accepted 46 per cent of submissions on average. Acceptance rates for OA journals were higher for all five disciplines, ranging from 38 per cent in business to 58 per cent in health. These results suggest that while OAMJ acceptance rates are generally higher than those of more selective journals, OA journals in some fields show broadly similar acceptance rates to some mega-journals.

Higher acceptance rates often translate to mega-journals being seen as places of "last resort" (Ware and Mabe, 2015), with articles rejected by more selective conventional journals being submitted to them. Since it is well established that authors commonly resubmit rejected articles to increasingly less prestigious journals (Cronin, 2012), OAMJs can perhaps be seen as an attractive venue for articles that have failed to find a home in more specialised publications. Solomon (2014) found that just over 52 per cent of the submissions at four mega-journals (BMJ Open, PeerJ, PLOS ONE and Sage Open) were papers that had previously been rejected by other, more selective, journals. The percentage of "resubmissions" was highest for BMJ Open (67 per cent) and lowest for PeerJ (37 per cent), while for Sage Open and PLOS ONE fewer than half of papers were resubmissions (46 and 49 per cent, respectively).

Solomon’s (2014) study also identified an interesting practice, highlighted particularly by BMJ Open authors: when a paper is rejected by one of the other (highly selective) BMJ journals, its editors encourage the authors to submit it to BMJ Open. Such "cascading" of articles is often associated with mega-journals, although not uniquely – BMC has a similar practice associated with its subject-specific titles. Some mega-journals may be seen as reception journals for such cascading practices, with some arguing that they were set up partly for that purpose (Norman, 2012). The practice of cascading reviews from highly selective journals down to mega-journals is one of Björk’s secondary mega-journal criteria (Björk, 2015), and has been somewhat controversial – particularly with regard to maintaining the anonymity of the reviewer and potential collusion between journals to maximise APC revenue (see e.g. the comments on Clarke, 2013). Nevertheless, such a practice undeniably addresses pressing issues of peer reviewer recruitment, in particular in relation to speeding up the publication process and reducing the extra costs incurred by additional rounds of peer review. For instance, it is estimated that 15 million hours are wasted each year in redundant rounds of peer review caused by the submit-reject publication cycle (Binfield, 2013).

Other innovative approaches to peer review, such as various forms of open peer review (Ford, 2013), have been practised by some mega-journals, notably F1000 Research. In the case of F1000, Wellen’s (2013) observation of mega-journals being midway between a journal and an archive has particular relevance, since articles are deposited and made public before peer review. The reviewer reports that follow are then themselves made public. Other mega-journals, such as PeerJ, make pre-prints available through their online platforms. More commonly, however, mega-journals practise confidential pre-publication peer review (albeit based on soundness only) but also provide opportunities for post-publication comment and discussion.

The implications of soundness-based peer review for quality have been widely debated (see e.g. the 100+ comments below Kent Anderson’s Scholarly Kitchen blog post on the subject, Anderson, 2010) but addressed empirically in very few studies. Björk and Catani (2016) is a recent exception. They attempted to measure the quality standards of traditional journals and mega-journals as evidenced by citation patterns. They found that mega-journals had an impact factor range similar to that of average traditional journals, and that their citation patterns, like those of average traditional journals, clearly differed from those of top-ranking journals. Furthermore, they found that, despite their higher acceptance rates, mega-journals have shorter tails of articles with two or fewer citations than average conventional journals. Björk and Catani’s tentative conclusion is that the system of peer review does not appear to lead to differences in eventual citation distributions between mega-journals and average traditional journals. Thus, they argue, soundness-based peer review is the most desirable system for both authors and readers, since it speeds up the publication process and avoids wasteful rounds of submission-rejection. The journals they use for comparison are, however, information science titles, which arguably may have different citation characteristics from those in the biosciences. Furthermore, the sample used by Björk and Catani (2016) comprised a number of titles, including SAGE Open, which covers HSS disciplines. It is, however, unclear precisely how soundness-based peer review applies in an HSS context where there may be somewhat different notions of quality.

MacCallum (2011) remarks on additional challenges for academics associated with moving to a "soundness-only" peer review process even within STM. Such an approach requires a shift in refereeing practices, and may conceivably create problems of consistency across the journal, and possibly also difficulties in recruiting reviewers. The latter is a common concern for publishing (Jubb, 2016), but one perhaps magnified for mega-journals, not least because of scale – PLOS ONE, for example, was reported in 2011 to maintain a pool of about 35,000 reviewers (Ware, 2011). The PeerJ business model of membership, requiring authors to also review articles, is a novel approach to addressing this concern (Wellen, 2013). The motivation of peer reviewers remains a particular issue in the OAMJ environment, however, for reasons other than scale. Studies show that at least part of the motivation of peer reviewers is the ability to play a role in shaping their discipline, a certain level of empowerment, as well as the furthering of social capital (enhancing personal reputation or networking, for instance) that is derived from being a reviewer (Björk, 2015; Ware, 2011). By narrowing the role of peer reviewers to verifying soundness rather than making judgements of novelty, importance or interest, mega-journals might be seen as diminishing that role in a way that could disincentivise participation.

This issue is part of a wider question relating to mega-journals: the role of the "gatekeeper". It is this question which underlies much of the debate on social media about the mega-journal phenomenon, where the role of gatekeepers (editors, editorial board members and peer reviewers) is at times hotly contested (see comments on Anderson, 2010). It remains unclear how the role of the gatekeeper will evolve if we are indeed witnessing a shift from a quality control system based on pre-publication peer review that takes into account novelty, importance and interest, to one based on soundness-only pre-publication peer review combined with post-publication, metrics-based assessment of novelty, importance and interest. This shift appears to represent a move from the "wisdom of the expert" to the "wisdom of the crowd" (albeit the academic "crowd"), or, as one contributor to a social media debate rhetorically put it, a move from "oligarchic pre-filtering to a democratic post-filter" (Mr Gunn (pseud), comment on Anderson, 2010). This places the changes implicit in the OAMJ phenomenon in a much wider context of power relationships within disciplinary communities, an area that would merit much further investigation.

Whilst the value of such a shift has been debated, little empirical work has yet been carried out on its implications. Some work has been produced on the efficacy of altmetrics, one of Björk’s secondary criteria, as indicators of quality compared with traditional measures of citation (Sud and Thelwall, 2014; Thelwall et al., 2013), with variable results. However, critics of mega-journals argue forcefully that such metrics are still underdeveloped and cannot in any case substitute for the filtering provided by expert peer review (comments on Anderson, 2010). The role of article-level metrics as a second line of quality control (post-publication), following soundness-only peer review (pre-publication), is certainly under-explored in the literature and merits further investigation and discussion.

7. OA economic model

The fourth major criterion for defining mega-journals is OA based on an APC business model. The model is, of course, not unique to mega-journals since it is used by many so-called "Gold" OA journals. It is, however, particularly important for mega-journals, especially those published in a commercial setting, since it is a model which scales – the close coupling of publication volume with financial income facilitates rapid content growth (Davis, 2014). Most titles classed as mega-journals by a range of commentators, therefore, use this model. It is, nevertheless, worth pointing out that Gold OA does not necessarily involve APCs. In fact, the majority of journals listed in the Directory of Open Access Journals are not funded by APCs; alternative models, including sponsorship and membership models, are also widely used. The Open Library of the Humanities, for example, is currently funded by a combination of the two. PeerJ’s membership model is an interesting experiment that has been widely discussed; however, its launch of an APC alternative to membership in 2016 seems to indicate that many authors understand and prefer the APC model to membership.

The setting of the PeerJ APC at US$695 raises the important question of APC levels. Mega-journals have mostly charged "moderate APCs" (Björk, 2015). APCs typically range from US$195 to US$1,950 and generally cluster around the US$1,300 mark (Björk, 2015; Björk and Solomon, 2014). This is considerably cheaper than many other APCs, particularly those of hybrid subscription-OA journals (Björk and Solomon, 2014; Pinfield et al., 2015, 2017). To place mega-journals’ fees in context, their average APC of US$1,300 is more than the average APC for fully OA journals, in the region of US$900, but substantially less than that of top-ranking fully OA journals (ranging from US$2,500 to US$5,000) or hybrid journals (US$3,000) (Solomon and Björk, 2012).

Relatively low APCs have given rise to controversy. Some have regarded them as unsustainable, with, for example, Ware and Mabe (2015) observing that they are below the average cost of publication. This is disputed by OA publishers, with Peter Binfield of PeerJ, for example, estimating publication costs to be in the "low hundreds of dollars" (Van Noorden, 2013). Eve (2015), discussing traditional publishers who have continued to collect large amounts of revenue despite the budgetary pressures facing libraries, states that "they may have a different idea, in the mind of shareholders, as to what ‘sustainable’ actually means". Costs differ depending on a variety of factors, notably rejection rates and editorial standards, but the levels of costs and the variables affecting them are not well documented. Recent research does, however, show that there is a correlation between APC price and quality, where quality is measured by citation rates (Björk and Solomon, 2015; Pinfield et al., 2017). This may help to reinforce the perceived association between mega-journals, which have lower APCs, and lower quality.

Nevertheless, APC prices are not simply set based on production costs, since prices in a commercial environment are also determined by what the market will bear. There are clearly variations across different disciplines and also different countries in terms of the availability of funds to pay APCs. External research funding in the humanities, and to some extent the social sciences, is sparse in comparison to the STM disciplines. This is thought to be intensifying the pressure on publishers to keep APCs in those lesser-funded disciplines lower than in better-funded disciplines, where the APC fee is not a decisive factor in authors’ publication choices (Solomon, 2014). This is likely to be the key reason for the APC of SAGE Open changing from US$695 in 2011 (the launch price) to US$395 in 2012, to US$99 in 2013, and back up to US$395 in 2016. The discounted prices between 2012 and 2015 were arguably an attempt to attract authors from these lesser-funded areas. Disciplinary differences in terms of publication funds were highlighted in Solomon’s (2014) study, with only 11 per cent of SAGE Open authors reporting having used grant funding to pay for their publication, compared with 52 per cent of PLOS ONE authors, 30 per cent of BMJ Open authors and 35.5 per cent of PeerJ authors. Furthermore, 62 per cent of SAGE Open authors in Solomon (2014) – about double the proportion of PeerJ authors – reported using personal funds (i.e. their own money rather than university/departmental/grant money), thus highlighting once more the overall lack of external funding in HSS.

As well as the APC business model allowing for rapid scaling of journal output, and therefore making the mega-journal model feasible, OAMJs also create the possibility of economies of scale, or at least efficiencies of various kinds. Instead of having to manage large portfolios of journal titles each with differing criteria for inclusion, publishers can create streamlined, integrated platforms for a single journal (Wellen, 2013). Whilst some of these economies (such as single publishing platforms) may be achievable with multiple titles, economies of scale are more likely to emerge if there is a single large scale product through standardisation and simplification of business processes, quality assurance approaches and decision making. Such an approach has been criticised as "bulk publishing" (Anderson, 2010; Butler, 2008; Davis, 2008). PLOS ONE, in particular, has been labelled a "cash cow" (Butler, 2008), with PLOS characterised as pursuing "the path of least resistance for author-pays publishing" which is "publish as many papers as possible", creating the danger, it is argued, of the organisation being seen as a "low quality, high volume publisher" (Anderson, 2010). Anderson (2010) links such an approach (although not PLOS itself) to a "predatory set of high-volume, author-pays journals that provide venues for weak studies". In the case of PLOS, however, it has also been observed that the profits created by the PLOS ONE model enable it to cross-subsidise the more highly selective titles produced by PLOS (Butler, 2008). Davis (2008) observes that this involves "some brand coat-tailing", with the highly selective titles conferring a more respected brand status on the mega-journal.

8. Attitudes to OAMJs in the academic community

The preceding sections have reviewed the current OAMJ landscape with regard to the major definitional characteristics of mega-journals. However, a final key area that arose from the review of the literature was the markedly varied attitudes towards mega-journals within the academic community.

Several studies have included some discussion of mega-journal author characteristics. Solomon (2014) found that the four mega-journals included in his study attracted authors from across the world, with exactly a quarter of authors being based in the USA. There were, however, notable variations between journals: 40 per cent of authors who published in PeerJ and Sage Open were from the USA, whereas BMJ Open attracted more articles from British and Australian authors than from those of any other country. PeerJ authors reported having slightly more experience with OA publishing than other authors. The international status of mega-journals was also confirmed in Burns (2015) – although his study is limited to only 49 papers in PeerJ – and in Fein (2013), who found that the USA was the primary contributor to PLOS ONE between 2007 and 2011, authoring 44 per cent of all articles, followed by the UK (10 per cent), Germany (10 per cent), China (9 per cent) and France (8 per cent). In Burns (2015), articles had a median of four authors, and just under 43 per cent were co-authored by authors based in at least two different countries.

Solomon (2014) also offers the richest picture of authors’ perceptions of mega-journals. In his survey of 665 authors who had published in four mega-journals (BMJ Open, PeerJ, PLOS ONE and Sage Open), he found that the two factors most likely to influence the decision to publish in an OAMJ were the quality of the journal (25.7 per cent) and speed of review (12.6 per cent). Journal impact factors (JIFs) appeared to be particularly important for PLOS ONE authors, while the reputation of the publisher was highly valued by both BMJ Open and Sage Open authors; PeerJ authors seemed to value most the review criteria, the speed of the review and the OA nature of the journal. Only 3 per cent of authors thought the broad scope of the journal positively influenced their decision to select it. The APC model was viewed as a negative factor for authors who had published in PLOS ONE and BMJ Open – which in fact charged the highest fees of the four journals surveyed. In contrast, Sage Open authors reported that the APC positively (albeit moderately) influenced their decision to select the journal. The membership model offered by PeerJ was perceived more positively than any of the APC models. Solomon’s findings are supported to a certain extent by the results of a BMJ Open author survey reported by Sands (2014). Based on the responses of 401 authors, Sands found that the most commonly selected reasons for publishing in the journal were its OA status (selected by 59 per cent of authors), the BMJ brand (50 per cent), speed of review (37 per cent) and the reputation of the journal (34 per cent). The journal’s impact factor was only the ninth most commonly given reason (13 per cent).

The informal literature also suggests that authors are sometimes drawn to OAMJs because of their review policies, which allow for the submission of articles that might not easily find a home elsewhere, particularly the reporting of preliminary, null, and replication results (Tredennick, 2013; comments on Anderson, 2010). Others appear to publish in such journals on principle, viewing their submission as an altruistic act in support of a challenge to the scholarly publishing status quo. Gurnhill (2015), for example, reports a researcher’s explanation for choosing to publish a second paper in PeerJ despite the first not being recognised by an institutional promotion process. While her reasons include the journal’s OA status and low publishing cost, she also notes the wider context:

Some might say my experience shows I should change my publishing strategy. I say it shows we should change how we evaluate researchers […] I support the ways in which PeerJ is trying to change our broken scholarly communication system

(McKiernan, in Gurnhill, 2015).

While these sources focus on the reasons to publish in an OAMJ, a substantial amount of the informal material offers the opposing perspective: why authors choose not to submit to mega-journals. The tone of much of this discussion is striking. Take, for example, a tweet quoted in the Early Career Ecologists blog (Tredennick, 2013): “I’ve heard now from several tenured or near-tenured profs that publishing in @plosone was career suicide. Thanks a lot Open Access”. Discussions on various online fora repeatedly link the perceived poor reputation of OAMJs to their review policies, which mean the journals are regarded (by more senior academics) as venues for work too poor in quality or lacking in significance to be published anywhere more selective (and by extension more reputable). As one researcher put it on Twitter, “for better or ill most of my peers look at PLOS ONE as the dumping ground for papers rejected at real journals” (Ted Hart, 2013, quoted in Tredennick, 2013). It is this perception that underpins the three “dangers” of publishing in mega-journals described in an Impactstory blog: that “co-authors won’t want to publish in megajournals”, that “no-one in my field will find out about it”, and that “my CV will look like I couldn’t publish in ‘good’ journals” (Impactstory, 2014). While the blog offers suggestions for counteracting these difficulties, in particular the use of article-level metrics to demonstrate the impact of mega-journal articles, it seems clear from the wider online discussions that significant numbers of researchers either perceive OAMJs as disreputable, or worry about the consequences of others taking that view.

A key element of this debate is the role of the JIF. While debates over the merit of the JIF have been long and heated (Brembs et al., 2013; Davis, 2013; Pendlebury, 2009), it is generally recognised that JIFs strongly influence perceptions of journal quality (Saha et al., 2003) and thus, by extension, perceptions of researcher output quality. This is perhaps of most significance for early career researchers applying for academic positions or tenure, who often perceive the JIF of the journals in which they have published as a key factor in the selection process (Eisen, 2012). Several mega-journal publishers, including PLOS, have publicly disavowed the JIF, signing the DORA declaration[2] and pledging not to use the JIF to promote the journal or its articles. In fact, the mega-journal concept aligns poorly with the JIF, not only because of the broad scope of such journals, but also because the OAMJ review policy does not deliberately reject articles on the basis of perceived lack of importance or significance (which implicitly would apply to those articles considered less likely to garner citations). It has been suggested, however, that PLOS ONE’s rapid growth in output was actually fuelled in large part by its being awarded a relatively high JIF (Davis, 2014), and some commentators have noted that the recent decline in output correlates with the journal’s declining impact factor (this latter point is disputed, however, with a range of other factors including falling research funding and increased competition from other OAMJs being identified) (Davis, 2013). It is likewise interesting to note that Scientific Reports’ output almost tripled in the year after its impact factor rose from 2.927 to 5.078. While the precise nature of the link between JIF and submission rates remains unclear, it appears likely that a mega-journal’s JIF does influence perceptions of OAMJs in the academic community. Thus, while many commentators, and indeed many OAMJs, view the JIF as a flawed metric detrimental to scholarly communication in general, it can be argued that its embeddedness in academic culture in fact incentivises OAMJ publishers to pursue ever higher JIFs. It is perhaps understandable, then, that some OAMJs (e.g. BMJ Open, Scientific Reports and Medicine) display their JIF prominently on the journal home or "about" pages.

9. Discussion

This review suggests that the academic literature has not yet entirely caught up with the mega-journal phenomenon. While there has been a vast amount published in areas closely related to mega-journals – particularly regarding OA, and the role of peer review – the body of formal literature specifically discussing mega-journals is relatively sparse, and centred around a few core authors. In contrast, the non-academic literature – blogs, mailing lists, etc. – is more vocal, featuring a range of views and, at times, heavily polarised debate.

A key challenge in interpreting and finding meaning in this discussion is unravelling the interconnected factors at play. Björk’s mega-journal criteria, which correspond to thematic areas used here partly to frame this review, are themselves, of course, interrelated, although it is not clear precisely how: is "large publication volume" the primary aim of OAMJs (which is clearly facilitated by a broad subject scope and an inclusive review policy), or is it, instead, a by-product of implementing a novel and more "democratic" approach to scholarly publishing? In practice, the answer is likely to differ for the various OAMJ publishers. In some cases, for example, OAMJs are evidently part of a cascade model in which publishers appear to be trying to retain articles rejected from their own highly selective titles; in others, the intention appears to be to change approaches to peer review in particular or publishing models in general. In other cases, there may be a desire to experiment, or to imitate other publishers in this area, without clear expectations of likely outcomes. It is important to note, however, that mega-journals are only one of a range of innovations within the scholarly communication system, and therefore co-exist on a spectrum of innovation that ranges from OA publishing platforms such as SciELO[3], and data and broad research output repositories such as RIO[4] at one end, via pre-print servers and institutional repositories, to traditional subscription journal publishing at the other. Determining how mega-journals will interact with and shape the development of these other innovations remains a key challenge, and one that requires substantial future research.

Large mega-journals that utilise an APC-based model clearly have the potential to raise significant amounts of revenue, with even relatively modest APCs apparently more than covering publishing costs. One interesting consequence of this is that it suggests a potential solution to a key problem associated with APC-based OA publishing: how it might work successfully for a highly selective journal, where the costs of rejecting articles are high and rejected articles generate no APC income. It is ostensibly difficult to see how high-prestige journals can operate in an OA environment where APCs do not cover the costs of high numbers of rejections. The "stand-alone" model for high-rejection rate journals in an OA environment is often seen as submission charges combined with APCs, but this is viewed as unpalatable by many (Ware, 2010). However, the economies of scale, or at least efficiencies through standardisation and simplicity, created by mega-journals potentially give rise to an alternative model which addresses this problem, in which a highly selective title (or titles) sits in a tiered, mutually supporting relationship with a mega-journal (Figure 2). In this model, the mega-journal provides a financial subsidy to the highly selective titles, enabling them to sustain the costs of rejecting a large proportion of submissions. At the same time, the mega-journal derives a reputational subsidy from the high-prestige title, helping to ensure that it continues to attract authors. This can work most effectively, of course, where there is a strong brand association between the high-prestige title and the mega-journal, perhaps most obviously in the sharing of a name. The PLOS titles exemplify this, with comments about the "cash cow" PLOS ONE "brand coat-tailing" PLOS Medicine and PLOS Biology expressing these ideas of financial and reputational subsidy (albeit in implicitly critical terms) (Davis, 2008). The Nature titles have a similar tiered arrangement (with three layers, i.e. the highly selective Nature, the selective and OA Nature Communications, and Nature’s OAMJ, Scientific Reports) but the top tier is not currently OA (and the current business incentive to flip it to an OA model would not be high when it is self-sustaining as a subscription title). Mega-journals operating independently (e.g. PeerJ) and those where there is not such a clear reputational association (e.g. Heliyon) do not work in this way, and it will be interesting to observe whether this creates different development paths for these journals (it may arguably have contributed to the demise of SpringerPlus). The tiered model also assumes that highly selective journals continue to play a significant role in scholarly communication, something that is no longer the certainty it once was. Many mega-journal advocates support OAMJs precisely because they appear to depart from the current hierarchical structure of scholarly publishing based on "subjective" selectivity.

Mega-journals are not only potentially disruptive in terms of altering the way research findings are assessed and communicated; they are also disrupting academic culture itself, traditionally punctuated by the rounds of submission and rejection that characterise the academic publication cycle. Indeed, how OAMJs fit into or influence the workflows of authors is a key question for scholarly communication researchers. For academic content consumers, the vast size of mega-journals’ output makes the journal itself almost irrelevant as a unit. Recent research into academic information seeking behaviour suggests ever increasing reliance on search engine and database use, and ever less direct engagement with journal or even publisher platforms (Housewright et al., 2013). Thus, while OAMJ websites typically offer browsing functionality based on sub-disciplinary hierarchies, it seems reasonable to suggest that these will prove to be less important than the effective integration of mega-journal content into the most commonly used aggregation or discovery services. Of perhaps greater significance than general discoverability is the filtering of OAMJ content. As already noted, a key argument put forward by those sceptical of the mega-journal phenomenon is that the removal of pre-publication filtering for significance or impact will result in a form of information overload, with researchers unable to identify important research among a sea of low quality outputs. Article-level metrics do, of course, offer one solution to this problem, although there remain doubts about the effectiveness of these measures (Pendlebury, 2009). It may be that other existing solutions – for example overlay journals, or community recommendation services like F1000 Prime – will come to play a key role in addressing this challenge.

Perhaps the key question is the extent to which the academic community comes to embrace the underlying philosophy that informs OAMJ review policies – that the academy is best served by the dissemination of all scientifically sound research. Whilst a substantial amount of anecdotal evidence suggests that publishing in mega-journals can be perceived as having negative implications for a researcher’s CV, it must also be noted that PLOS ONE is now the largest journal in the world; clearly, a significant number of researchers are not persuaded of the dangers of publishing in a mega-journal. It seems reasonable to suggest that recent movements towards open science and data practices, combined with the increasing number of researchers accustomed to self-archiving and other OA models, will bolster support for approaches to publishing perceived to be more open or “democratic”. That this approach facilitates the dissemination of previously difficult-to-publish material – negative results, replication studies, etc. – might also prove an important factor in the acceptance of “objective” peer review as a new academic standard.

As already observed, the relative lack of detailed research into the mega-journal phenomenon means that understanding their current place in the publishing landscape is challenging. Determining the role OAMJs may play in the future is an even more difficult task. Clearly a range of factors will influence this future, including the extent to which funder mandates can drive OA publishing in general, the role and perceived importance of the JIF, how well technology can support researchers’ information seeking and filtering, and the emergence (or not) of competing models. True mega-journal advocates argue that OAMJs offer a future in and of themselves, with the entirety of scholarly output one day reviewed for soundness and published in “giant content generators”, leaving the community at large to decide what is important. Alternatively, it may be that “peak mega-journal” has already been reached and that new models will emerge to sustain traditional journal publishing, or that the JIF culture is too ingrained in academic practice for “scientific soundness” review policies to succeed. Another possibility is that mega-journals might prove most valuable as a means of effectively challenging the status quo and facilitating the transition to even more radical departures from traditional publishing models. What is clear, however, is that the potential significance of mega-journals, along with the widely varying perspectives on them, demonstrates that the OAMJ phenomenon merits further systematic study.

10. Conclusion

OAMJs now appear to be an established part of the scholarly communication landscape. Their combination of new approaches to scale, scope and quality – in conjunction with an OA business model – means that they have given rise to debate and controversy, some of it heated. This has been played out in the formally published literature but also in social media and e-mail discussion lists. At times, the discussions (and the apparent assumptions behind them) have had a dichotomous character, with OAMJs pictured either as a paradigm for the future of scholarly communication or as a retrograde decline in publishing standards. However, the wide range of perspectives on mega-journals and the complexity of the issues to which they give rise serve to demonstrate that characterising the debate in such binary terms may not be helpful. In one intervention in a social media debate, Esposito has tried to steer a middle course:

WE DO NOT HAVE TO CHOOSE. We can have PLoS AND PLoS One AND Lancet, BMC, JACS, Science, Nature, AHA, and many, many others. Stop fighting. Let the people behind these various services and publications endeavor to make their case with authors and readers. This is not a political matter, but one of the unfolding of the marketplace

(Esposito, comment on Anderson, 2010).

Mega-journals themselves are heterogeneous in their characteristics; they do not constitute a homogeneous genre. Key components of OAMJs can be (and are being) implemented and combined in different ways. It remains to be seen, however, whether all this will simply add to the range of choices for authors, with mega-journals finding an accommodation with conventional selective journals, or whether mega-journals signal a reshaping of scholarly communication that will ultimately change its fundamental character.

Figures

Figure 1. Article numbers in the 11 largest mega-journals

Figure 2. Tiered model of journal subsidy

Peer-reviewed articles focussing on OAMJs (as of May 2016)

Year | Authors | Article title | Journal title | Topics discussed
2013 | Wellen | “Open access, megajournals, and MOOCs: On the political economy of academic unbundling” | SAGE Open | Analysis of various open access initiatives, including mega-journals, from the perspective of the “unbundling” of functions, including their distinctive approach to peer review and journal business models. Also discusses the impact of new post-publication metrics and their link with an increased emphasis on academic productivity and regulation
2013 | Fein | “Multidimensional journal evaluation of PLOS ONE” | Libri | Bibliometric study of PLOS ONE attempting to evaluate the journal through various indicators, including journal output, journal content, citations and journal management
2014 | Solomon | “A survey of authors publishing in four megajournals” | PeerJ | Study based on survey data exploring the profile of researchers publishing in four open-access mega-journals (BMJ Open, PeerJ, PLOS ONE and SAGE Open)
2014 | Xia | “An examination of two Indian megajournals” | Learned Publishing | Case study looking at the editorial practices of two Indian open-access mega-journals (International Journal of Advanced Research in Computer Science and Software Engineering and International Journal of Engineering Research and Applications)
2015 | Burns | “Characteristics of a megajournal: a bibliometric case study” | Journal of Information Science Theory and Practice | Bibliometric study of a small sample of articles published in PeerJ looking at authors’ profiles, peer review, speed of publication and alternative metrics
2015 | Björk | “Have the ‘mega-journals’ reached the limits to growth?” | PeerJ | Paper looking at the defining characteristics of open-access mega-journals, including output volumes, publication charges, acceptance rates and speed of publication
2016 | Björk and Catani | “Peer review in megajournals compared with traditional scholarly journals: does it make a difference?” | Learned Publishing | Research paper investigating whether soundness-based peer review affects citation rates

Criteria for defining “mega-journal”

Primary criteria: big publishing volume or aiming for it; peer review of scientific soundness only; broad subject area; full open access with APC
Secondary criteria: moderate APC; high-prestige publisher; academic editors; reusable graphics and data; altmetrics; commenting; portable reviews; rapid publication

Aggregated list of OAMJs

Journal | Total number of the six sources listing it (Norman, 2012; Binfield, 2013; Björk, 2015; Ware and Mabe, 2015; megajournals.info, 2016; Wikipedia, 2016)
ACS Omega 2
AIP Advances 4
Biology Open 6
BMC Medicine 1
BMJ Open 6
BMJ Open Diabetes Research & Care 3
BMJ Open Respiratory Research 3
Cell Reports 1
CMAJ Open 2
Cogent Chemistry 1
Cogent Economics & Finance 1
Cogent Engineering 1
Cogent Series 1
Collabra 1
Cureus 3
De Gruyter Open imprint 1
Ecosphere 2
Elementa 3
EPJ-Plus 1
F1000 Research 3
Facets (series) 1
FEBS Open Bio 5
Frontiers in […] (series) 2
G3: Genes, Genomes, Genetics 4
Heliyon 3
IEEE Access 5
Journal of Engineering 2
mBio 2
Open Biology 1
Open Heart 2
Open Library of the Humanities 4
Optics Express 2
PeerJ 5
PLOS ONE 6
QScience Connect 2
RIO (Research Ideas and Outcomes) 1
Royal Society Open Science 3
SAGE Open 5
Sage Open Medicine 3
Science Advances 1
Scientific Reports 6
SpringerPlus 5
The Scientific World Journal 2
The Winnower 1
Zootaxa 2

Notes

1. Data retrieved from Scopus (25.05.2016), and limited to publications indexed as “Articles”.

Appendix

Table AI

References

Anderson, K. (2010), “PLoS’ squandered opportunity – their problems with the path of least resistance”, The Scholarly Kitchen, 27 April, available at: http://scholarlykitchen.sspnet.org/2010/04/27/plos-squandered-opportunity-the-problem-with-pursuing-the-path-of-least-resistance/ (accessed 27 January 2016).

Anderson, K. (2014), “Can mega-journals maintain boundaries when they and their customers align on ‘publish or perish’?”, The Scholarly Kitchen, 29 January, available at: http://scholarlykitchen.sspnet.org/2014/01/29/can-mega-journals-maintain-boundaries-when-they-and-their-customers-both-embrace-publish-or-perish/ (accessed 9 December 2015).

Binfield, P. (2012), “PLoS ONE – a personal farewell”, PLoS ONE Blog, 18 May, available at: http://blogs.plos.org/everyone/2012/05/18/plos-one-a-personal-farewell/ (accessed 10 December 2015).

Binfield, P. (2013), “Open access megajournals – have they changed everything?”, Creative Commons Blog, 23 October, available at: http://creativecommons.org.nz/2013/10/open-access-megajournals-have-they-changed-everything/ (accessed 29 December 2015).

BioMed Central (2016), “The BMC-series journals”, available at: www.biomedcentral.com/p/the-bmc-series-journals (accessed 15 February 2016).

Björk, B.-C. (2015), “Have the ‘mega-journals’ reached the limits to growth?”, PeerJ, Vol. 3, Article ID e981, pp. 1-11.

Björk, B.-C. and Catani, P. (2016), “Peer review in megajournals compared with traditional scholarly journals: does it make a difference?”, Learned Publishing, Vol. 29 No. 1, pp. 9-12.

Björk, B.-C. and Hedlund, T. (2015), “Emerging new methods of peer review in scholarly journals”, Learned Publishing, Vol. 28 No. 2, pp. 85-91.

Björk, B.-C. and Solomon, D. (2014), “Developing an effective market for open access article processing charges”, Wellcome Trust, London, doi:10.6084/m9.figshare.951966.

Björk, B.-C. and Solomon, D. (2015), “Article processing charges in OA journals: relationship between price and quality”, Scientometrics, Vol. 103 No. 2, pp. 373-385.

Brembs, B., Button, K. and Munafò, M. (2013), “Deep impact: unintended consequences of journal rank”, Frontiers in Human Neuroscience, Vol. 7, pp. 291-302.

Buriak, J.M. (2015), “Mega-journals and peer review: can quality and standards survive?”, Chemistry of Materials, Vol. 27 No. 7, p. 2243.

Burns, C.S. (2015), “Characteristics of a megajournal: a bibliometric case study”, Journal of Information Science Theory and Practice, Vol. 3 No. 2, pp. 16-30.

Burns, C.S. (2016), “Megajournals and the impact factor”, Social Informatics Blog, 25 February, available at: https://socialinfoblog.wordpress.com/author/csburns/ (accessed 15 March 2016).

Busby, L. (2015), “A matter of size”, The Serials Librarian, Vol. 69 Nos 3-4, pp. 233-239.

Butler, D. (2008), “PLoS stays afloat with bulk publishing”, Nature News, Vol. 454 No. 11, available at: www.nature.com/news/2008/080702/full/454011a.html (accessed 12 March 2016).

Clarke, M. (2013), “Game of papers: eLife, BMC, PLoS and EMBO announce new peer review consortium”, The Scholarly Kitchen, 15 July, available at: https://scholarlykitchen.sspnet.org/2013/07/15/game-of-papers-elife-bmc-plos-and-embo-announce-new-peer-review-consortium/ (accessed 31 May 2016).

Cronin, B. (2012), “The resilience of rejected manuscripts”, Journal of the American Society for Information Science and Technology, Vol. 63 No. 10, pp. 1903-1904.

Davis, P. (2008), “Bulk publishing keeps PLoS afloat”, The Scholarly Kitchen, 7 July, available at: https://scholarlykitchen.sspnet.org/2008/07/07/bulk-publishing-keeps-plos-afloat/ (accessed 11 May 2016).

Davis, P. (2013), “The rise and fall of PLOS ONE’s impact factor”, The Scholarly Kitchen, 20 June, available at: https://scholarlykitchen.sspnet.org/2013/06/20/the-rise-and-fall-of-plos-ones-impact-factor-2012-3-730/ (accessed 15 November 2015).

Davis, P. (2014), “PLOS ONE output falls following impact factor decline”, The Scholarly Kitchen, 7 March, available at: https://scholarlykitchen.sspnet.org/2014/03/07/plos-one-output-falls-following-impact-factor-decline/ (accessed 18 May 2016).

Eisen, M. (2012), “The widely held notion that high-impact publications determine who gets academic jobs, grants and tenure is wrong. Stop using it as an excuse”, it is NOT junk, 4 February, available at: www.michaeleisen.org/blog/?p=911 (accessed 20 November 2015).

Epstein, S. (2016), “A few words on sound science, megajournals, and an announcement about SpringerPlus”, SpringerOpen Blog, 13 June, available at: http://blogs.springeropen.com/springeropen/2016/06/13/a-few-words-on-sound-science-megajournals-and-an-announcement-about-springerplus/ (accessed 15 July 2016).

Eve, M.P. (2015), “Clarifying a few facts for Elsevier and their response to Lingua”, 5 November, available at: www.martineve.com/2015/11/05/clarifying-a-few-facts-for-elsevier-and-their-response-to-lingua/ (accessed 16 December 2015).

Fein, C. (2013), “Multidimensional journal evaluation of PLOS ONE”, Libri, Vol. 63 No. 4, pp. 259-271.

Ford, E. (2013), “Defining and characterizing open peer review: a review of the literature”, Journal of Scholarly Publishing, Vol. 44 No. 4, pp. 311-326.

Frontiers (n.d.), “‘Frontiers in’ journal series”, available at: www.frontiersin.org/ (accessed 15 February 2016).

Gargouri, Y., Lariviere, V., Gingras, Y., Carr, L. and Harnad, S. (2012), “Green and gold open access percentages and growth, by discipline”, 17th International Conference on Science and Technology Indicators (STI), ENID, Montreal, CA, pp. 285-292.

Genetics Society of America (n.d.), “G3: Genes, Genomes, Genetics mission”, available at: www.g3journal.org/site/misc/about.xhtml (accessed 12 March 2016).

Grimme, S. (2015), “New open access journal will publish across all disciplines”, 8 January, available at: www.elsevier.com/connect/new-open-access-journal-will-publish-across-all-disciplines (accessed 6 January 2016).

Guédon, J.-C. (2015), “Re: Elsevier: trying to squeeze the virtual genie back into the physical bottle”, 26 May, available at: http://mailman.ecs.soton.ac.uk/pipermail/goal/2015-May/003377.html (accessed 24 November 2015).

Gurnhill, G. (2015), “Accessibility and added value: a personal perspective on publishing in PeerJ by Erin McKiernan”, PeerJ Blog, 24 April, available at: https://peerj.com/blog/post/115284877728/accessibility-and-added-value-a-personal-perspective-on-publishing-in-peerj-by-erin-mckiernan/ (accessed 3 March 2016).

Housewright, R., Schonfeld, R.C. and Wulfson, K. (2013), “Ithaka S+R|JISC|RLUK UK Survey of Academics 2012”, London, available at: www.rluk.ac.uk/wp-content/uploads/2014/02/UK_Survey_of_Academics_2012_FINAL.pdf (accessed 4 January 2016).

Impactstory (2014), “The 3 dangers of publishing in ‘megajournals’ – and how you can avoid them”, Impactstory Blog, 3 April, available at: http://blog.impactstory.org/the-3-dangers-of-publishing-in-megajournals-and-how-you-can-avoid-them/ (accessed 6 January 2016).

Jubb, M. (2016), “Peer review: the current landscape and future trends”, Learned Publishing, Vol. 29 No. 1, pp. 13-21.

MacCallum, C.J. (2011), “Why ONE is more than 5”, PLoS Biology, Vol. 9 No. 12, pp. 1-4.

megajournals.info (2016), “Open access megajournals”, available at: https://megajournals.info/ (accessed 18 January 2016).

Norman, F. (2012), “Megajournals”, Trading Knowledge Blog, 9 July, available at: http://occamstypewriter.org/trading-knowledge/2012/07/09/megajournals/ (accessed 31 May 2016).

OLH (2013), “Open Library of Humanities: about”, Open Library of Humanities, 25 January, available at: https://web.archive.org/web/20130125064822/http://www.openlibhums.org/about/ (accessed 1 February 2014).

Pendlebury, D.A. (2009), “The use and misuse of journal metrics and other citation indicators”, Archivum Immunologiae et Therapiae Experimentalis, Vol. 57 No. 1, pp. 1-11.

Pinfield, S. (2015), “Making open access work”, Online Information Review, Vol. 39 No. 5, pp. 604-636.

Pinfield, S., Salter, J. and Bath, P.A. (2015), “The ‘total cost of publication’ in a hybrid open-access environment: institutional approaches to funding journal article-processing charges in combination with subscriptions”, Journal of the Association for Information Science and Technology, Vol. 67 No. 7, pp. 1751-1766, doi: 10.1002/asi.23446.

Pinfield, S., Salter, J. and Bath, P.A. (2017), “A ‘gold-centric’ implementation of open access: hybrid journals, the ‘total cost of publication’ and policy development in the UK and beyond”, Journal of the Association for Information Science and Technology (in press), available at: http://eprints.whiterose.ac.uk/96336/ (accessed 26 January 2017).

PLOS ONE (n.d.), “PLOS ONE: journal information”, PLOS ONE website, available at: www.plosone.org/static/information.action (accessed 15 February 2014).

Powell, K. (2016), “Does it take too long to publish research?”, Nature, Vol. 530 No. 7589, pp. 148-151.

Saha, S., Saint, S. and Christakis, D.A. (2003), “Impact factor: a valid measure of journal quality?”, Journal of the Medical Library Association (JMLA), Vol. 91 No. 1, pp. 42-46.

Sands, R. (2014), “Comparing the results from two surveys of BMJ Open authors”, BMJ Blogs, 9 May, available at: http://blogs.bmj.com/bmjopen/2014/05/09/comparing-the-results-from-two-surveys-of-bmj-open-authors/ (accessed 12 December 2015).

Solomon, D.J. (2014), “A survey of authors publishing in four megajournals”, PeerJ, Vol. 2, Article ID e365, pp. 1-15.

Solomon, D.J. and Björk, B.-C. (2012), “A study of open access journals using article processing charges”, Journal of the American Society for Information Science and Technology, Vol. 63 No. 8, pp. 1485-1495.

Sud, P. and Thelwall, M. (2014), “Evaluating altmetrics”, Scientometrics, Vol. 98 No. 2, pp. 1131-1143.

Sugimoto, C.R., Larivière, V., Ni, C. and Cronin, B. (2013), “Journal acceptance rates: a cross-disciplinary analysis of variability and relationships with journal measures”, Journal of Informetrics, Vol. 7 No. 4, pp. 897-906.

SV-Pow (2015), “Have we reached peak megajournal?”, SV-Pow Blog, 29 May, available at: http://svpow.com/2015/05/29/have-we-reached-peak-megajournal/ (accessed 6 January 2016).

Thelwall, M., Haustein, S., Larivière, V. and Sugimoto, C.R. (2013), “Do Altmetrics work? Twitter and ten other social web services”, PLoS ONE, Vol. 8 No. 5, pp. 1-7.

Tredennick, A. (2013), “Why I published in PLoS ONE. And why I probably won’t again for awhile”, Early Career Ecologists, 21 March, available at: https://earlycareerecologists.wordpress.com/2013/03/21/why-i-published-in-plos-one-and-why-i-probably-wont-again-for-awhile/ (accessed 7 December 2015).

Van Noorden, R. (2013), “Open access: the true cost of science publishing”, Nature, Vol. 495 No. 7442, pp. 426-429.

Wakeling, S., Willett, P., Creaser, C., Fry, J., Pinfield, S. and Spezi, V. (2016), “Open-access mega-journals: a bibliometric profile”, PLoS ONE, Vol. 11 No. 11, pp. 1-26.

Ware, M. (2010), “Submission fees – a tool in the transition to open access?”, Mark Ware Consulting Ltd for the Knowledge Exchange, Bristol, available at: www.markwareconsulting.com/wordpress/wp-content/uploads/2010/12/KE_Submission_fees_Short_Report_2010-11-25-1.pdf (accessed 10 December 2015).

Ware, M. (2011), “Peer review: recent experience and future directions”, New Review of Information Networking, Vol. 16 No. 1, pp. 23-53.

Ware, M. and Mabe, M. (2015), The STM Report, International Association of Scientific, Technical and Medical Publishers, The Hague, available at: www.markwareconsulting.com/the-stm-report/ (accessed 11 December 2015).

Wellen, R. (2013), “Open access, megajournals, and MOOCs: on the political economy of academic unbundling”, SAGE Open, Vol. 3 No. 4, pp. 1-16.

Wikipedia (2016), “Mega journal”, available at: https://en.wikipedia.org/wiki/Mega_journal (accessed 15 February 2016).

Wiser, J. (2015), “The future of serials: a publisher’s perspective”, Serials Review, Vol. 40 No. 4, pp. 238-241.

Xia, J. (2014), “An examination of two Indian megajournals”, Learned Publishing, Vol. 27 No. 3, pp. 195-200.

Zahedi, Z., Costas, R. and Wouters, P. (2014), “How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications”, Scientometrics, Vol. 101 No. 2, pp. 1491-1513.

Zhang, Z.-Q. (2006), “The making of a mega-journal in taxonomy”, Zootaxa, Vol. 1385 No. 1, pp. 67-68.

Acknowledgements

This paper was produced as part of the Open-Access Mega-Journals and the Future of Scholarly Communication project funded by the UK Arts and Humanities Research Council (AHRC), AH/M010643/1.

Corresponding author

Stephen Pinfield can be contacted at: s.pinfield@sheffield.ac.uk

Related articles