Hobson’s choice: the effects of research evaluation on academics’ writing practices in England

Sharon McCulloch (Department of Linguistics and English Language, Lancaster University, Lancaster, UK)

Aslib Journal of Information Management

ISSN: 2050-3806

Article publication date: 18 September 2017


Abstract

Purpose

The purpose of this paper is to examine the influence of research evaluation policies and their interpretation on academics’ writing practices in three different higher education institutions and across three different disciplines. Specifically, the paper discusses how England’s national Research Excellence Framework (REF) and institutional responses to it shape the decisions academics make about their writing.

Design/methodology/approach

In total, 49 academics at three English universities were interviewed. The academics were from one Science, Technology, Engineering and Mathematics discipline (mathematics), one humanities discipline (history) and one applied discipline (marketing). Repeated semi-structured interviews focussed on different aspects of academics’ writing practices. Heads of departments and administrative staff were also interviewed. Data were coded using the qualitative data analysis software, ATLAS.ti.

Findings

Academics’ ability to succeed in their career was closely tied to their ability to meet quantitative and qualitative targets driven by research evaluation systems, but these targets were predicated on an unrealistic understanding of knowledge creation. Research evaluation systems limited the epistemic choices available to academics, partly because they pushed academics’ writing towards genres and publication venues that conflicted with disciplinary traditions and partly because their effects were unevenly distributed across institutions and age groups.

Originality/value

This work fills a gap in the literature by offering empirical and qualitative findings on the effects of research evaluation systems in context. It is also one of the few papers to focus on the ways in which individuals’ academic writing practices in particular are shaped by such systems.

Citation

McCulloch, S. (2017), "Hobson’s choice: the effects of research evaluation on academics’ writing practices in England", Aslib Journal of Information Management, Vol. 69 No. 5, pp. 503-515. https://doi.org/10.1108/AJIM-12-2016-0216

Publisher

Emerald Publishing Limited

Copyright © 2017, Sharon McCulloch

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Almost every aspect of an academic’s work is mediated by writing, both in terms of the day-to-day tasks that consume their time and in terms of their scholarship over the course of a career. This writing and the practices around it are changing as the demands of academic life have changed. Transformations in higher education, including the introduction of a more managerialist approach, have altered the nature of the writing demands faced by academics (Clarke and Newman, 1997; Deem et al., 2007). One of the most significant of these demands on academics’ writing practices relates to systems for evaluating research quality, which, in the UK, where the current study is located, take the form of the national Research Excellence Framework (REF). This national exercise in assessing research quality is conducted every five to six years, with the aim of rating research outputs so that higher education funding bodies can allocate funding accordingly (REF, 2014). The REF is based on a system of peer review, with research outputs being submitted to a panel of experts, who read them and assign each a star rating from one to four, four being the highest. The scores for all research submitted in a given department are aggregated and government funding is allocated accordingly. Institutions publicise their departments’ REF scores in order to demonstrate their quality and attract students. A high REF score therefore feeds into university rankings and league tables, which in turn affect an institution’s ability to attract income from tuition fees.
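To make the scoring mechanism described above concrete, the sketch below models, in deliberately simplified form, how a department’s star ratings might be aggregated into a quality profile and combined with funding weights. The ratings, weights and figures are hypothetical illustrations of the principle, not HEFCE’s actual funding formula.

from collections import Counter

def quality_profile(star_ratings):
    """Return the percentage of a department's outputs at each star level (1-4)."""
    counts = Counter(star_ratings)
    total = len(star_ratings)
    return {stars: 100 * counts.get(stars, 0) / total for stars in range(1, 5)}

def funding_weight(profile, weights):
    """Combine a quality profile with per-star funding weights."""
    return sum(profile[stars] * weights.get(stars, 0) for stars in range(1, 5))

# A department's ten submitted outputs, as rated by the expert panel (hypothetical).
outputs = [4, 3, 3, 3, 2, 4, 3, 2, 1, 3]
profile = quality_profile(outputs)
print(profile)  # {1: 10.0, 2: 20.0, 3: 50.0, 4: 20.0}

# Hypothetical weights reflecting the premium placed on the highest-rated work.
weights = {4: 4.0, 3: 1.0, 2: 0.0, 1: 0.0}
print(funding_weight(profile, weights))  # 130.0

Under any scheme of this shape, a department’s funding depends disproportionately on its share of three- and four-star work, which helps to explain the institutional strategies described below.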

Given the monetary and reputational value of high REF scores, most universities and departments have strategies aimed at encouraging academic staff to produce work likely to score highly in the REF. These include policies on the number of publications to be produced, the type of publication (e.g. articles or books) and the choice of publication venue. The latter can include setting criteria for target (prestigious, high-impact factor) journals and rewarding publication of articles in these.

This paper aims to explore the ways in which systems for evaluating research, including the REF and the institutional strategies it spawns, shape academics’ writing practices in three different universities in England. With reference to extracts from semi-structured interviews with academics, it is argued that these evaluation policies put conflicting pressures on academics’ writing, limiting the epistemic positions they can hold and the choices they can make.

Research evaluation and its effects

Both institutional and individual research excellence are constructs whose meanings are situated within the social and political contexts in which they are used, and both the notion of “excellence” and the means by which it is used to assess research have been extensively critiqued on an ideological level. Moore et al. (2016), for example, have argued that excellence includes a comparative element, and is primarily, therefore, a rhetorical device for claiming value across institutions and disciplines rather than a meaningful way of describing the inherent qualities of any given piece of work. Strathern (1997, 2000) has argued that research audit systems tend to conflate the quality of research itself and the quality of departments or research centres. Similarly, Willmott (2011) and Burrows (2012) argued that evaluation tools such as journal lists and the h-index have come to enact academic value in ways for which they were never designed.

Empirical research from the fields of science and technology studies and policy studies has tended to take a macro-perspective, looking at the effects of research evaluation systems at national or institutional level. For example, a number of studies have compared different national research evaluation systems (Derrick and Pavone, 2013; Reale and Seeber, 2013; Rebora and Turri, 2013), while bibliometric studies have examined the interaction between citation patterns and research evaluation frameworks (Moed, 2005; Bornmann, 2013). At institutional level, Espeland and Sauder (2007) examined the effects of university rankings in law schools in the USA, while a more recent study by Rushforth and De Rijcke (2015) examined the ways in which journal impact factors were used by research teams in two university medical centres in the Netherlands. One of the few studies to focus on arts and humanities researchers was by Hammarfelt and De Rijcke (2015), who combined data regarding publishing practices in Sweden with survey data from one arts faculty to understand how academics were responding to research evaluation policies in Sweden.

A smaller body of literature has attempted to examine the constitutive effects of research evaluation systems from the bottom up, by looking at their influence on the lives of individual academics. Gill (2009) examined the effects of research evaluation on academics’ motivation and well-being, while others have focussed on academics’ attitudes towards evaluative tools such as journal impact factors and the h-index (Aksnes and Rip, 2009; Buela-Casal and Zych, 2012). Less is known, however, about the influence of research evaluation on actual knowledge production (Musselin, 2013), particularly at the level of individual academics’ lived experience and writing practices. One recent exception is Fochler et al. (2016), who investigated the ways that doctoral students and early career researchers in Austria ascribe worth to different aspects of their work, and found that post-docs’ sense of value was closely coupled to dominant research evaluation regimes.

Most of the studies above have focussed on the natural sciences, and little is known about how supposedly transparent research evaluation policies interact with the knowledge creation practices of different disciplines. In particular, few studies have looked at humanities or applied disciplines such as marketing. Furthermore, there is a lack of qualitative studies that might shed light on how individual academics interpret and experience research evaluation in context (Hammarfelt and De Rijcke, 2015). Finally, the ways in which writing practices in particular are shaped by evaluation systems have received very little attention, even though writing is central to what academics do. Writing is not a transparent medium for communicating information, but a highly social activity and a site of negotiation between sometimes conflicting sets of priorities and identities (Trede et al., 2012) that lie at the heart of what it means to create knowledge.

This paper aims to explore individual academics’ responses to the multiple demands on their scholarly writing, specifically how their writing and publishing practices are shaped directly and indirectly by research evaluation policies, and how these policies affect the choices they make about their research writing. The paper draws on qualitative interview data from academics in three different disciplines at three universities in England, collected as part of an ESRC-funded project based at Lancaster University. The project examines how knowledge is produced through academics’ writing practices, and how these are shaped by the contemporary context of higher education, including managerial practices and research evaluation systems.

For the purposes of this study, writing is seen as a set of practices that are embedded in social contexts historically located in time and space (Barton, 2007; Hamilton, 2012; Tusting, 2012). This approach acknowledges that academic writing entails what Van Leeuwen (2008, p. 6) calls “socially regulated ways of doing things”, and involves analysing elements of the everyday writing experiences and activities of participants. These included the relationships and collaborations implicated in their writing, the tools and resources they drew on, and the distribution of writing activities across space and time (Lemke, 2000; Nespor, 2007). The results presented in this paper focus mainly on one aspect of these practices, namely academics’ publishing choices and priorities, and consider how these reflect deeper epistemic values and identities.

Methods

In order to explore how policies around research evaluation might interact with factors such as institutional context and disciplinary cultures to influence academics’ writing, participants were recruited from three different institutions and three different disciplines in England. One of the institutions was a large research-intensive Russell Group university (Russell Group), one was a newer, also research-intensive university, established in the 1960s and located on a green campus outside the nearest town (Plate Glass), and the third was a former polytechnic, a teaching-focussed institution that was awarded university status in 1992 (post-1992).

The disciplines, mathematics, history and marketing, were chosen in order to yield data from a range of different traditions and norms of knowledge production, specifically, one Science, Technology, Engineering and Mathematics discipline, one classic humanities discipline and one more applied discipline.

Participants were initially recruited using a combination of convenience and snowball sampling (Mewburn and Thomson, 2013), whereby informants from the researchers’ professional networks were asked for suggestions for potential participants, who then recommended others via their professional networks. In cases where this did not yield a contact in the target disciplines, academics deemed suitable were contacted directly via their institutional webpage and invited to participate. This yielded a total of 81 interviews, with 49 different academics. All participants were in research-active posts, ranging in seniority from lecturer to professor. The names of universities and individuals have been anonymised and in presenting the data, some identifying details have been changed.

A core group of 16 academics were interviewed three times, with each interview focussing on a different aspect of their writing practices. In order to better understand the effects of material space and resources on their knowledge creation practices, the first of these was a “go-along” interview (Garcia et al., 2012), in which participants gave the researchers a virtual and physical tour of their workplace. The second, techno-biographical, interview (Barton and Lee, 2013) focussed on the participants’ use of digital technologies at different points and in different domains of their lives. Finally, a “day-in-the-life” interview focussed on a specific day in the life of the participants. The remaining 33 academics were interviewed once, with the aim of verifying findings from the core group. These one-off interviews focussed on the academics’ writing practices in general, including the genres of writing they were expected to produce, the resources they drew on to achieve this, and the means by which their writing was evaluated. Interviews were also conducted with administrative staff and heads of departments in order to understand how writing is shared, allocated, counted and evaluated at departmental and faculty level.

The interviews were recorded, transcribed and anonymised before being entered into the qualitative data analysis software ATLAS.ti for coding. The coding system was used to categorise the data and to tag, among other things, instances in which participants talked about different genres of writing or means of evaluating their work. The focus in this paper is on the chain of effects between national REF policy, institutional and departmental policies, and individual academics’ writing practices.
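As an illustration of the kind of query such coding enables, the sketch below tallies coded interview segments by discipline. The segment data and code labels are invented for demonstration purposes; they represent neither ATLAS.ti’s interface nor the project’s actual coding scheme.

from collections import Counter

# Hypothetical coded segments of the kind produced during qualitative coding.
segments = [
    {"participant": "P01", "discipline": "marketing", "code": "journal-lists"},
    {"participant": "P02", "discipline": "history", "code": "genre-monograph"},
    {"participant": "P03", "discipline": "marketing", "code": "REF-targets"},
    {"participant": "P04", "discipline": "mathematics", "code": "grey-literature"},
    {"participant": "P05", "discipline": "marketing", "code": "REF-targets"},
]

# Count how often each code occurs within each discipline.
tally = Counter((s["discipline"], s["code"]) for s in segments)
for (discipline, code), n in sorted(tally.items()):
    print(f"{discipline:12} {code:18} {n}")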

Results and discussion

The results of the project reveal tensions between the forms of knowledge that were valued for the purposes of the national REF and those that were valued by the academics themselves. These tensions had a powerful influence on the choices available to academics about which forms of writing to prioritise and where to publish their research. The effects of research evaluation systems were unevenly distributed across disciplines, institutions and academic age groups, with implications for disciplinary values and career mobility. The following three sections discuss, first, findings about the close link between academics’ career success and the understanding of knowledge creation that underlies the REF; second, the epistemic effects of research evaluation systems in two disciplines; and, finally, an analysis of interview data on the effects of the REF on writing for non-academic audiences.

Narrowing understandings of academic success

Unsurprisingly, this study found that academics’ conditions of probation and promotion were tied to their scholarly writing, which in turn was understood almost exclusively in terms of the REF. Although Emma does not explicitly mention the REF in her comment below, she does echo its terminology in describing the standard she is expected to meet as three-star, which, according to the REF, is “internationally excellent”:

Extract 1

I’m on probation at the moment, a four-year probationary period. During that time I have to publish two papers at three-star

(Emma, Lecturer in Marketing, Russell Group).

Head of department Stephanie also invokes the REF when asked what writing she expected the academic staff in her department to be producing:

Extract 2

[…] the minimum that would be required is one good publication a year […] if somebody had one good publication a year, they would be okay for the REF […]. They would probably be okay for promotion and everything else

(Stephanie, Head of Department, Plate Glass).

Asked what she meant by a “good publication”, she talked about expected genre (a journal article), the number of authors (one) and the quality of the journal (the best you could possibly publish in):

Extract 3

[…] what does one good publication a year mean? It’s a difficult matter. So, the prototypical thing would be one single authored paper in one of the best journals you could possibly publish in, and occasionally a book

(Stephanie, Head of Department, Plate Glass).

The UK’s national research evaluation system has become a key factor in evaluating individual academics’ performance, shaping not only the number of publications academics are expected to produce, but also the forms these should take and the venues in which they should appear. Stephanie’s comment above also suggests that the REF is constraining aspects of academics’ writing practices such as the collaborations they can enter into, since single-authored papers are preferred.

The demands of the REF on academics’ writing practices were interpreted and enacted in different ways by different institutions, departments and disciplines. For example, while Stephanie’s department set implicit criteria for assessing quality, drawn from informal, shared understandings of the reputational standing of journals in her disciplinary area, in other cases, most notably in marketing departments, more explicit, externally imposed criteria were used to assess research output.

Every marketing department participating in this study used the Chartered Association of Business Schools’ (ABS) Journal Guide as a means of setting quality criteria for staff publications. This annually published guide ranks journals in the field of business and management, including marketing, using a star rating system that mimics the nomenclature of the REF. Academics were expected to target specific journals, namely those ranked as three- and four-star according to the ABS guide, and the star rating system employed by ABS was deeply embedded in their discourse about scholarly writing. Every marketing academic interviewed for this study used the star rating terminology of the ABS Journal Guide as a shorthand for talking about their own publications, and it loomed darkly over them as the determinant of their success or otherwise as academics. This was linked partly to the perceived loss of scholarly autonomy that a list of this type carried with it, and partly to a feeling that the targets set by their departments were very difficult to achieve. Charles, who had been in post for nine years, felt that expectations were becoming harder to meet:

Extract 4

Now back when I started it was “Just get a couple of twos, maybe a couple of threes, if you get included in the REF that’s brilliant”. Now you need, as a junior member of staff or any member of staff in this department, you need to be able to get a four-star journal

(Charles, Senior Lecturer in Marketing, Plate Glass).

The epistemic effects of such a narrowly defined understanding of success for the discipline of marketing are discussed in more detail in the section on epistemology below, but it is worth considering for a moment the conceptualisation of knowledge creation being espoused here. Many participants characterised writing as a difficult, creative endeavour not necessarily amenable to being produced to the same standard again and again on demand:

Extract 5

Even when you do get one [a four-star publication], it’s a very creative thing that we do. So how many times has somebody come along with a number one hit and you never hear from them again? You’re asked to continually repeat this

(Charles, Senior Lecturer in Marketing, Plate Glass).

The interviews carried out for this study revealed a picture of academic writing as involving long hours struggling with data, with many doubts, revisions and rejections along the way. Knowledge creation as actually experienced by academics may be an untidy process involving ups and downs, disappointments and false starts, rather than a flat line of constant high performance. One potential consequence of the pressure to consistently produce “world-leading” journal articles, without the failures and dead ends that knowledge work actually entails, is that academics may shy away from risky or innovative areas of research, as others have pointed out (Martin and Whitley, 2010; Fochler et al., 2016), for fear that it will either lead them into a cul-de-sac from which no publications directly ensue, or be rejected by top-ranking journals.

Not only is academic writing itself probably messier and less conducive to performance targets than the REF implies, but the conditions under which many academics are expected to produce this writing may also be less than favourable. In the post-1992 institution in this study, most academics had relatively high teaching loads and did not always enjoy a culture in which research writing was valued. One head of department at this university described research as “the icing on the cake”, but nevertheless acknowledged that “everyone does it”. Mark describes a situation below in which high-impact scholarly publications are expected but not prioritised in his workload:

Extract 6

I don’t get any hours for writing. I don’t get any hours for research whatsoever. So basically, unless your work is at least three-star, four-star, then you don’t get any hours for it because although it’s two-star material and it is REF-able, they’re only interested in three- and four-star

(Mark, Lecturer in Marketing, Post-1992).

Although Mark’s university is a teaching-focussed institution, he is nevertheless expected to do research, but time is not allocated for this unless research of three- or four-star quality has already been produced, presumably in one’s own time. The obvious paradox is that without the time to write, academics are unlikely to be able to produce top quality publications.

One effect of the way the REF is interpreted in different contexts is that career mobility for academics at teaching-intensive institutions may be curtailed. One such academic explains that the set of standards applied to his writing in his current department would not enable him to move to a more research-intensive university. Again the irony is that unless one is already producing “excellent” research, one is hardly in a position to produce more of the same:

Extract 7

We had a kind of research meeting a while ago and they were saying well, just get stuff out there. It doesn’t matter whether it’s two or whether it’s even one […]. But I wouldn’t necessarily govern my writing by that because I think that in order to move I need to demonstrate that I’m in three- and four-star journals. I don’t think that one- and two-star would hold much weight if I wanted to, say, go to [a research-intensive university], for example

(Rory, Senior Lecturer in Marketing, Post-1992).

The institutions and departments participating in this study linked their staff’s working conditions, including probation and promotion, to performance targets closely linked to the REF. The criteria for judging success were widely perceived by academics to be inappropriate for both practical and epistemic reasons.

Those working in less research-intensive universities faced the paradox that while research was expected, it was not prioritised, so they were not given the time necessary to reach the expected level of performance. The understanding implicit in academics’ complaints in this regard is that writing high-quality journal articles takes time. The participants in this study saw the process of knowledge creation through academic writing as creative, intellectually laborious and characterised by highs and lows. This is at odds with the conceptualisation of knowledge creation that underpins the research evaluation systems by which their performance is judged, which suggests that scholarly writing can be squeezed into the gaps between teaching and produced to a uniformly high standard, not only throughout the rhythms of the academic year but across an entire academic career, from day one.

Epistemology and research evaluation

Research evaluation systems interacted with disciplinary cultures as well as institutional contexts, particularly in history and marketing. This interaction most obviously influenced the value placed on certain genres of writing and venues for publication, but it also had knock-on effects for the way disciplinary knowledge was conceptualised and the nature of the research that was made possible.

Academics in all three disciplines talked about peer-reviewed journal articles as their main currency when it came to the REF. This was a source of contention in history, where monographs are traditionally the most highly prized genre. According to Harley et al.’s (2010) survey of 160 academics in seven disciplinary fields in the USA, history is a “book-based field” in which the scholarly monograph is the gold standard for publication (p. 293). Historians in this study also described the monograph as their most valued form of knowledge creation, but one that was under pressure due to the need to produce enough publications in each REF cycle, as illustrated below:

Extract 8

[The monograph] is regarded as the core part of our discipline, and what it is to write history, and to do something creative with our discipline, is under attack, because people don’t appreciate the amount of work that goes into it, the length of time it takes

(Rebecca, Lecturer in History, Plate Glass).

The finding that genres favoured by research evaluation systems may not align with those most valued by academics themselves is consistent with previous studies (Laudel and Gläser, 2006; Nygaard, 2017; Hammarfelt and De Rijcke, 2015). However, it is arguably not the genre itself that is key here, but the epistemic meaning ascribed to it. Rebecca’s comment shows that writing is seen as something creative and time-consuming, and this particular form of it as something foundational to the discipline itself.

Pressure to publish journal articles was not the only reason why the monograph was perceived as under threat. The ideologies that accompany narrow measures of research evaluation can also generate forms of writing that take precious time away from what academics see as core disciplinary work. Many participants had targets for income generation written into their contracts, and thus spent a lot of time writing grant applications. One historian described being unable to devote sufficient time to writing a book because of writing funding applications:

Extract 9

So at the moment I’m working on a book and I’d really like to devote 100% of my research time to that but in the institutional culture, we’re under pressure to meet targets for grant applications and other projects

(Alex, Senior Lecturer in History, Russell Group).

History was not the only discipline in which the REF and its interpretation were perceived as having undesirable epistemic effects. In marketing, the use of target journal lists was experienced as pushing at the very boundaries of the discipline. Most top-ranking marketing journals (according to the ABS Journal Guide) are based in the USA, and there was a widespread belief among the UK-based academics in the current study that this was a barrier to publishing in them:

Extract 10

[…] it’s becoming harder and harder and harder in management and certainly marketing. So for marketing, I can’t get four-star marketing because I don’t live in America and I haven’t got an American accent and I don’t use American English. It’s no good using spellcheck to change it; there’s a different way of talking, which gets picked up and gets kicked out, right?

(Diane, Professor in Marketing, Plate Glass).

The issue here is not just one of “accent” but a deeper epistemological issue related to the way knowledge in the discipline is understood and validated. The “way of talking” Diane describes includes the tendency of these American journals to publish mainly quantitative work. This constitutes a fundamentally different way of seeing the world from her own. She went on to say, “I’m not a positivist. I don’t do modelling. I have no way of engaging with that world”. Whether one is a “positivist” or not does not simply concern the research instruments one employs; it reflects an entire methodological paradigm, a particular view on what counts as knowledge, which in turn shapes the questions one poses, the interpretations one makes, and the positions one adopts in relation to the world. Despite the epistemological gulf between who Diane saw herself to be and what her department demanded that she do, she, like other participants, saw these targets as unavoidable, and tried to shape her writing around them, even if this meant changing her research in ways that threatened her sense of identity as a scholar:

Extract 11

So because of the research I do, I could either move department […] or I can do it another way. Now I target management journals, which is one way of hitting a four-star […]. You can get published in top rated medical journals, so it’s even influenced the setting when researching because you’ve got to play the game otherwise you’re nobody. You get trampled on

(Diane, Professor in Marketing, Plate Glass).

Diane’s comments not only echo Burrows’ (2012) sentiment that metrics aimed at evaluating research force academics to “play or be played”, but also reflect a deep epistemic unease. Other participants talked of the “death of marketing in the UK”. The use of target journal lists affects not only the final venue for the publication of research; it also risks changing where scholars in applied fields locate their research, and squeezing out smaller-scale qualitative studies simply because they do not fit into the ever-narrowing definition of “excellent”.

Evaluation of writing for different audiences

The main locus of pressure from research evaluation systems discussed so far is scholarly writing by academics, primarily for other academics. This section considers the tensions between scholarly writing for peer-reviewed journals and other forms of knowledge creation aimed at non-academic audiences. Although the REF itself does not take journal quality ratings into account, the criteria used by institutions to determine whether staff are producing REF-able publications tend to be based on the perceived quality of the publication venue, which in turn is determined primarily by citation metrics. Thus, the REF and local interpretations of it push academics to write the sort of texts that are aimed primarily at an academic audience and whose influence can be measured in citations. However, in 2014 the REF also included for the first time the notion of impact, allowing 20 per cent of the REF score to be accounted for by the impact that research makes beyond academia. According to the UK’s Higher Education Funding Council for England, “impact” is defined as “[…] an effect, change or benefit beyond academia, in areas such as the economy, environment, policy, culture, health, or society at large” (Higher Education Funding Council for England, 2016) (emphasis added).
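As a rough illustration of this weighting, the sketch below computes an overall REF-style score from sub-profile grade point averages. The 20 per cent impact weighting comes from the text above; the 65 per cent outputs and 15 per cent environment weightings are assumptions based on the published REF 2014 scheme, and the departmental figures are invented.

def overall_ref_score(outputs_gpa, impact_gpa, environment_gpa):
    """Weighted average of the three REF sub-profiles on the 0-4 star scale.

    Weights of 65% outputs, 20% impact and 15% environment are assumed,
    following the REF 2014 scheme.
    """
    return 0.65 * outputs_gpa + 0.20 * impact_gpa + 0.15 * environment_gpa

# Hypothetical departmental grade point averages for each sub-profile.
print(overall_ref_score(outputs_gpa=3.1, impact_gpa=3.4, environment_gpa=2.8))  # ~3.115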

This understanding of impact explicitly refers not to influence within the disciplinary community in the form of citations, but to engagement with professional or lay communities. In order to reach audiences beyond the academy, academics have to communicate their findings, and arguably their research intentions, to those who, in all likelihood, do not read high-impact academic journals. Rather, achieving impact might entail writing for non-academic audiences in the form of reports, policy recommendations, websites, blogs, exhibition catalogues, articles for trade journals and the like. However, such non-traditional genres of academic writing are often not perceived to meet the criteria departments have in mind when they talk about a track record of “good publications”. Thus, there exists a tension between the need to produce writing that counts towards an academic’s career success and writing that demonstrates the societal relevance of their research.

Asked about impact-related writing for non-academic audiences, many participants expressed enthusiasm about the potential of, and principle behind, these genres, but engaged in them only to a limited extent because of their perceived lower value in the eyes of their institutions. David, a mathematician, describes his views on blogs and other forms of grey literature:

Extract 12

A lot of the work is grey literature where people have written blog pieces. I think that’s opened my eyes to what’s possible in that area but yes, if there’s time – I think it’s always a question of time. Again, that work is not valued by the university as far as I can see

(David, Professor in Mathematics, Plate Glass).

Despite his enthusiasm for these emerging forms of scholarship, David points to two barriers to this sort of writing: lack of time and a sense that it is not valued at institutional level. Given the apparently increasing pressure academics are under to produce three- and four-star publications, it is unsurprising that they feel pushed for time. David’s perception that grey literature is not valued by his university may appear to be at odds with the fact that impact has been formally incorporated into the REF, a research evaluation system that universities in the UK take extremely seriously. However, it is in keeping with Watermeyer’s (2015) finding that such work is seen as a fringe activity, and with Harley et al.’s (2010, p. 8) observation that “edited volumes, critical editions, exhibitions, dictionary/encyclopaedia entries, software […] do not count for much” unless publications in high-impact peer-reviewed academic journals are already in place. Furthermore, according to HEFCE, impact must be underpinned by research produced during the REF period, and is thus directly related to specific research outputs submitted to the REF (Higher Education Funding Council for England, 2011, p. 27). Forms of writing directed at the general public are therefore valued only when directly linked to, and originating in, scholarly writing directed at academic audiences.

Historian Colin also held an ambivalent position when asked about impact-related writing, on the one hand expressing commitment to the public good, and on the other seeing it as something of an optional extra:

Extract 13

The university is committed to something called social responsibility. Well, I am very happy to sign up to that […] It’s just that it is extra and it’s quite demanding, and I wouldn’t like it to take over my writing life

(Colin, Professor in History, Russell Group).

Colin specialised in an area of post-war history that had again become something of a hot topic due to events in the news, and he had been approached by several newspapers to write a piece on this issue. He talked positively about the greater reach he could achieve by writing for the popular press rather than a scholarly paper that would be read by a handful of academics. He was keen to influence the debate this way, but nevertheless perceived this sort of public engagement as secondary to more central disciplinary forms of knowledge creation. This is similar to the findings of Felt et al. (2016, p. 753), whose participants saw outputs aimed at non-academic audiences as, at best, “also valuable”. In line with Fochler et al.’s (2016) conclusions about the role of teaching and learning in the academic lives of post-docs, the findings discussed here point to a form of knowledge relations whereby non-academic knowledge creation is valued only to the extent that it does not impede the steady production of “world-leading” knowledge as conceptualised in the narrow terms of citation-linked research evaluation practices.

Those academics in the current study who did engage in writing aimed at non-academic audiences did so only after prioritising the forms of writing that mattered most for the purposes of probation and promotion. The data also lend support to Harley et al.’s (2010) finding that writing directed at impact and public engagement is considered more appropriate at some stages of an academic’s career than others. Robert, a professor in his 60s, described writing a maths book aimed at children:

Extract 14

It’s not exactly something that you would encourage a starting lecturer to do because there are just too many things and you’ve got to establish yourself in various ways. Once you’ve reached a certain age, it’s not a bad thing to be thinking about explaining maths

(Robert, Professor in Mathematics, Russell Group).

Robert’s comments suggest that more established scholars may enjoy a greater degree of freedom from the pressures of the REF. The secure status of established academics, who have long since passed probation and no longer need to apply for promotion, may free them from the imperatives of the evaluative measures used to assess performance. This is consistent with previous findings of a “generation gap” in academia, whereby younger researchers are more constrained by the effects of research evaluation systems because they have to operate in a competitive market for permanent posts (Hammarfelt and De Rijcke, 2015; Fochler et al., 2016). In this sense, positions of resistance or non-compliance with the imperatives of research evaluation systems may be available only to certain groups of older, more established academics.

Conclusions

This study has shown that writing is not only a key activity in the day-to-day business of being an academic, but also a means by which academics’ professional competence is assessed. To have a certain number of publications of specified quality is not only to be “REF-able”, but also to be employable and promotable, and, ironically, to gain access to the time and support necessary to facilitate the production of good quality research.

In order to succeed on the terms dictated by research evaluation systems driven by the REF, academics in England are forced to align their practices with a neoliberal culture that fundamentally misconstrues scholarly writing as an easily reproducible technical skill rather than a difficult, creative and rather unpredictable endeavour.

This study also revealed that academics’ writing efforts were directed mainly towards publishing in high-impact journals, attracting citations and generating grant income at the expense of other forms of knowledge creation, because these activities were key to defining their success in terms of the REF. Certain genres and publication venues were valued over others, but at a deeper level, research assessment regimes also shaped “what can be talked about and how valuations of academic worth are being made” (De Rijcke et al., 2016, p. 165). Specifically, at disciplinary level, the research paradigms that were available, the settings in which research could be conducted and the disciplines to which academics belonged were all called into question. One implication of this might be that, in addition to academics avoiding risky or innovative research, the value of qualitative studies, particularly in marketing, is eroded.

Another effect of the ways that institutions in this study interpreted the REF was that a tension emerged between writing that counts towards academics’ career progression and writing that might be valuable in the broader sense of contributing to the public good (Felt et al., 2016). Impact, although formally part of the UK’s national REF and valued by academics in principle, is seen as something of a luxury to be indulged in only if time allows, since writing for public engagement and impact contributes little to achieving the prized identity category of “REF-able”. This view is enacted not only in terms of which tasks are prioritised in busy working days, but also in terms of which genres of writing academics pursue at different stages of their career. The freedom to write for non-academic audiences was enjoyed to a greater extent by older, more established academics.

A generation gap of sorts also emerged in terms of choices to opt out of chasing top-ranking publications. Resisting pressure to produce three- and four-star publications, even where one’s institution did not demand these, was seen as a career-limiting option. An implication of this is the risk of a two-tier system developing, in which some academics become trapped in teaching-intensive roles because they are denied the kind of knowledge creation work that would enable them to move. This is a particular issue for younger academics, who may start their careers in less research-intensive institutions in the hope of establishing a research trajectory over time.

The demands of the REF and the internal policies that institutions put in place in response to it are shaping academics’ writing practices in contradictory ways, since the definition of success engendered therein excludes many valued knowledge creation practices and limits the options available to academics in carving out their scholarly niche.

It is difficult to disentangle every source of change in academics’ writing, since it is under pressure from many directions. Preferred genres of writing may be changing in response to other factors besides research evaluation measures. Digitisation in general, and the changing nature of academia, in which academics’ visibility is seen as increasingly important, have undoubtedly contributed to expectations that academic staff blog, tweet and engage in emerging genres of semi-scholarly writing. Nevertheless, it is clear that research evaluation practices have important effects on academics’ writing priorities and choices. These choices are, of course, not really choices at all, since writing towards REF-driven targets is something academics have to do not only to progress in their career, but also to keep their current job and avoid sanctions.

References

Aksnes, D.W. and Rip, A. (2009), “Researchers’ perceptions of citations”, Research Policy, Vol. 38 No. 6, pp. 895-905.

Barton, D. (2007), Literacy: An Introduction to the Ecology of Written Language, 2nd ed., Blackwell, Oxford.

Barton, D. and Lee, C. (2013), Language Online: Investigating Digital Texts and Practices, Routledge, London and New York, NY.

Bornmann, L. (2013), “The problem of citation impact assessments for recent publication years in institutional evaluations”, Journal of Informetrics, Vol. 7 No. 3, pp. 722-729.

Buela-Casal, G. and Zych, I. (2012), “What do the scientists think about the impact factor?”, Scientometrics, Vol. 92 No. 2, pp. 281-292.

Burrows, R. (2012), “Living with the h-index? Metric assemblages in the contemporary academy”, The Sociological Review, Vol. 60 No. 2, pp. 355-372.

Clarke, J. and Newman, J. (1997), The Managerial State: Power, Politics and Ideology in the Remaking of Social Welfare, Sage, London.

Deem, R., Hillyard, S. and Reed, M. (2007), Knowledge, Higher Education and the New Managerialism: The Changing Management of UK Universities, Oxford University Press, Oxford.

De Rijcke, S., Wouters, P., Rushforth, A., Franssen, T.P. and Hammarfelt, B. (2016), “Evaluation practices and effects of indicator use – a literature review”, Research Evaluation, Vol. 25 No. 2, pp. 161-169, doi: 10.1093/reseval/rvv038.

Derrick, G. and Pavone, V. (2013), “Democratising research evaluation: achieving greater public engagement with bibliometrics-informed peer review”, Science and Public Policy, Vol. 40 No. 5, pp. 563-575, doi: 10.1093/scipol/sct007.

Espeland, W.N. and Sauder, M. (2007), “Rankings and reactivity: how public measures recreate social worlds”, American Journal of Sociology, Vol. 113 No. 1, pp. 1-40.

Felt, U., Igelsböck, J., Schikowitz, A. and Völker, T. (2016), “Transdisciplinary sustainability research in practice: between imaginaries of collective experimentation and entrenched academic value orders”, Science, Technology, & Human Values, Vol. 41 No. 4, pp. 732-761, doi: 10.1177/0162243915626989.

Fochler, M., Felt, U. and Müller, R. (2016), “Unsustainable growth, hyper-competition, and worth in life science research: narrowing evaluative repertoires in doctoral and postdoctoral scientists’ work and lives”, Minerva: A Review of Science, Learning and Policy, Vol. 54 No. 2, pp. 175-200, doi: 10.1007/s11024-016-9292-y.

Garcia, C.M., Eisenberg, M.E., Frerich, E.A., Lechner, K.E. and Lust, K. (2012), “Conducting go-along interviews to understand context and promote health”, Qualitative Health Research, Vol. 22 No. 10, pp. 1395-1403.

Gill, R. (2009), “Breaking the silence: the hidden injuries of neo-liberal academia”, in Flood, R. and Gill, R. (Eds), Secrecy and Silence in the Research Process: Feminist Reflections, Routledge, London, pp. 228-244.

Hamilton, M. (2012), Literacy and the Politics of Representation, Routledge, London.

Hammarfelt, B. and De Rijcke, S. (2015), “Accountability in context: effects of research evaluation systems on publication practices, disciplinary norms, and individual working routines in the Faculty of Arts at Uppsala University”, Research Evaluation, Vol. 24 No. 1, pp. 63-77.

Harley, D., Acord, S.K., Earl-Novell, S., Lawrence, S. and Judson King, C. (2010), Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines, Center for Studies in Higher Education, University of California, Berkeley, CA, available at: https://escholarship.org/uc/item/15x7385g (accessed 9 December 2016).

Higher Education Funding Council for England (2011), “Assessment framework and guidance on submissions”, available at: www.hefce.ac.uk/research/ref/pubs/2011/02_11/ (accessed 12 December 2016).

Higher Education Funding Council for England (2016), “REF impact”, available at: www.hefce.ac.uk/rsrch/REFimpact/ (accessed 9 December 2016).

Laudel, G. and Gläser, J. (2006), “Tensions between evaluations and communication practices”, Journal of Higher Education Policy and Management, Vol. 28 No. 3, pp. 289-295.

Lemke, J.L. (2000), “Across the scales of time: artifacts, activities, and meanings in ecosocial systems”, Mind, Culture, and Activity, Vol. 7 No. 4, pp. 273-290.

Martin, B.R. and Whitley, R. (2010), “The UK research assessment exercise: a case of regulatory capture?”, in Whitley, R., Gläser, J. and Engwall, L. (Eds), Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and their Consequences for Intellectual Innovation, Oxford University Press, Oxford, pp. 51-80.

Mewburn, I. and Thomson, P. (2013), “Why do academics blog? An analysis of audiences, purposes and challenges”, Studies in Higher Education, Vol. 38 No. 8, pp. 1105-1119.

Moed, H.F. (2005), Citation Analysis in Research Evaluation, Springer, Dordrecht.

Moore, S., Neylon, C., Eve, M.P., O’Donnell, D.P. and Pattinson, D. (2016), “‘Excellence R Us’: university research and the fetishisation of excellence”, Figshare, available at: https://dx.doi.org/10.6084/m9.figshare.3413821.v1 (accessed 7 December 2016).

Musselin, C. (2013), “How peer review empowers the academic profession and university managers: changes in relationships between the state, universities and the professoriate”, Research Policy, Vol. 42 No. 5, pp. 1165-1173.

Nespor, J. (2007), “Curriculum charts and time in undergraduate education”, British Journal of Sociology of Education, Vol. 28 No. 6, pp. 753-766.

Nygaard, L.P. (2017), “Publishing and perishing: an academic literacies framework for investigating research productivity”, Studies in Higher Education, Vol. 42 No. 3, pp. 519-532, doi: 10.1080/03075079.2015.1058351.

Reale, E. and Seeber, M. (2013), “Instruments as empirical evidence for the analysis of higher education policies”, Higher Education, Vol. 65 No. 1, pp. 135-151.

Rebora, G. and Turri, M. (2013), “The UK and Italian research assessment exercises face to face”, Research Policy, Vol. 42 No. 9, pp. 1657-1666.

REF (2014), “About the REF”, available at: www.ref.ac.uk/about/ (accessed 4 December 2016).

Rushforth, A. and De Rijcke, S. (2015), “Accounting for impact? The journal impact factor and the making of biomedical research in the Netherlands”, Minerva: A Review of Science, Learning and Policy, Vol. 53 No. 2, pp. 117-139, doi: 10.1007/s11024-015-9274-5.

Strathern, M. (1997), “‘Improving ratings’: audit in the British university system”, European Review, Vol. 5 No. 3, pp. 305-321.

Strathern, M. (Ed.) (2000), Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy, Routledge, London.

Trede, F., Macklin, R. and Bridges, D. (2012), “Professional identity development: a review of the higher education literature”, Studies in Higher Education, Vol. 37 No. 3, pp. 365-384.

Tusting, K. (2012), “Learning accountability literacies in educational workplaces: situated learning and processes of commodification”, Language and Education, Vol. 26 No. 2, pp. 121-138.

Van Leeuwen, T. (2008), Discourse and Practice: New Tools for Critical Discourse Analysis, Oxford University Press, Oxford.

Watermeyer, R. (2015), “Lost in the ‘third space’: the impact of public engagement in higher education on academic identity, research practice and career progression”, European Journal of Higher Education, Vol. 5 No. 3, pp. 331-347.

Willmott, H.C. (2011), “Journal list fetishism”, Organization, Vol. 18 No. 4, pp. 429-442.

Acknowledgements

The author and research team (Karin Tusting, David Barton, Ibrar Bhatt, and Mary Hamilton) would like to thank all those who participated in the research and gave their time so generously. Without them, this research would not have been possible. The author would also like to acknowledge the Economic and Social Research Council, which has funded the research [award No. ES/L01159X/1].

Corresponding author

Sharon McCulloch can be contacted at: s.mcculloch@lancaster.ac.uk

About the author

Sharon McCulloch is an Associate Lecturer in the Department of Linguistics and English Language at Lancaster University, and a Senior Teaching Fellow at UCL, both in the UK. Her research interests are in literacy practices as they pertain to both students and professional writers in higher education. She is particularly interested in the relationship between academic reading and writing, and in how they relate to knowledge production.
