Preface

Case Study Evaluation: Past, Present and Future Challenges

ISBN: 978-1-78441-064-3, eISBN: 978-1-78441-063-6

ISSN: 1474-7863

Publication date: 3 January 2015

Citation

Simons, H. (2015), "Preface", Case Study Evaluation: Past, Present and Future Challenges (Advances in Program Evaluation, Vol. 15), Emerald Group Publishing Limited, Leeds, pp. ix-xv. https://doi.org/10.1108/S1474-786320140000015011

Publisher

Emerald Group Publishing Limited

Copyright © 2015 Emerald Group Publishing Limited


Case study has a long tradition in several social science disciplines and professional fields, a tradition which it is important and useful to acknowledge. However, the use of case study in evaluation, the prime focus of this volume, has a more recent history. It is timely to review the reasons for its evolution and usefulness in evaluating social, health and educational programmes, and to renew its advocacy, for the intrinsic worth of case study in evaluation has been overshadowed in current times by methodologies that promise greater ‘certainty’ and demonstration of impact on narrow measures of worth. A study of the singular, the particular, the unique, deserves reconsideration as a means of understanding the complexity of programmes and policies in turbulent political and social contexts.

The evolution of case study evaluation took place in the contemporary context of programme and project evaluation in the late sixties and seventies to explore and document the complex and unique experiences of major curriculum innovations (Simons, 1971, 1980). Earlier models had failed to capture the complexity of these programmes in action, ignored the agency of those implementing them and did not account for the uniqueness of context and place. Hence they were unable to provide relevant evidence and judgements of worth to inform programme development or offer an adequate basis for future policy determination.

What case study evaluation was able to do was to get close to the experience of the people who were implementing innovative programmes, to explore how these were interpreted in practice, chart their development, and document effects in the particular socio-political context in which they occurred. The picture that emerged was at once more complex than that provided through earlier methodologies, more relevant to issues important to people in the programme, and more authentic, grounded, as it was, in the experience of the programme in action. Most importantly, the findings were ‘interpreted in context’ (Cronbach, 1975), a context that was often culturally, institutionally and regionally diverse.

An early book on case study in educational evaluation, Towards a Science of the Singular (Simons, 1980), while acknowledging antecedents in different disciplines and professions, especially in methods, explored the particular logic of case study in evaluation and educational contexts. This highlighted the essential characteristics of case study evaluation and also the potential problems of ethics and reporting that such close-up studies of people, programmes and policies in socio-political contexts would incur.

It is important to underscore here why it was necessary to examine case study logic in the particular context of evaluation, as the purpose of evaluation is different from that of research in other disciplines. It is to establish the worth and value of something, whether of a project, programme, policy or institution; and it is inherently political. It addresses the questions of who gets to see what about whom, in what circumstances and when, and what the consequences are for different people of the policies and programmes evaluated. Given this political dimension, it has to be responsive to different perspectives and interests, including and balancing them fairly, and facilitate use – a major criterion of evaluation.

Three particular features of case study that contribute to its utility in programme and policy evaluation are the opportunities it offers to represent multiple perspectives of a programme or policy, to engage participants in identifying issues and interpreting the case, and to provide a rich, contextual picture of what transpired in the field from which policy makers, practitioners and the general public can learn. In other words, it is accessible on several levels.

Given its openness and flexibility (constrained neither by method nor time), case study evaluation also allows evaluators to observe close interactions between participants and stakeholders in the programme and, given appropriate ethical protocols, to manage power relations within the case in the interest of conducting and reporting a study that is fair and just. This presupposes a particular model of democratic evaluation (MacDonald, 1976; Simons, 1987) but it is one that Kushner (2000) has noted is a ‘natural fit’ with case study methodology.

For the most part in the early turn to case study evaluation, traditional qualitative methods – interview, observation, document analysis – were employed, signalling a shift in the epistemological basis of the ways we can come to know and understand social and educational programmes. These days, the scope has widened to include visual, narrative and digital methods. Case study may of course be conducted using quantitative methods and from different standpoints – different ways of knowing and understanding require different methods. However, the major emphasis in this volume is on the form of understanding that the move to qualitative methods promoted. The critical factor is the case. It is for evaluators to choose which methods are appropriate to address the evaluation questions they are exploring in a particular case.

This turn to case study evaluation coincided with the growing ‘quiet revolution’ in qualitative research occurring at around the same time (Denzin & Lincoln, 1994), which provided further methodological support for different ways of knowing in case study evaluation.

Much of this development took place in the field of educational evaluation. However, not long after, in the eighties, we saw an expansion into the fields of health and social care: in medicine (Greenhalgh & Worrall, 1997); nursing (Zucker, 2001); and social work (Shaw & Gould, 2001). These fields also recognised that what matters in evaluating programmes and policies is context, complexity, culture and people in particular socio-political settings. Several papers in this volume demonstrate how these elements are represented in practice and the difficulties associated with them.

In recent years, in a changing political and economically constrained context, case study evaluation has been less prevalent as the sole approach to evaluating programmes and policies. The rise of ‘mixed methods’ evaluation has been one factor in this context. But the more significant detractor has been the resurgence of belief, at a political level, in randomised controlled trials as the sine qua non of what should count as evidence to influence policy. Another way of putting this is to say that there is distrust in the capacity of evaluation from single cases to provide a safe basis to inform policy. This belief is mistaken, but it is commonly held.

In the mixed methods debate, it is also not clear whether a case study approach is awarded an epistemological basis equal to that of other methodologies, or whether it is simply seen as a context in which data gained from other methods are interpreted, or adopted on the assumption that more than one methodology gives the evaluation more validity. Mere adoption of several methods does not increase validity, however (though this is also a commonly held belief); it depends upon how the different methods are combined or integrated and what questions they each address.

There are several issues that worry recipients, readers and commissioners of case study evaluation on which I wish to comment briefly. The first is generalisation. As I indicated above, it is not true that you cannot generalise from case study evaluation even though the case(s) are particular. There are several ways in which one can do this (Simons, 2009), though it is not generalisation in a propositional sense (see Flyvbjerg, 2006; Simons et al., 2003; Stake, 1978): all retain a connection with the context in which the generalisation first arose. However, the overriding potential and usefulness of case study is particularisation (Simons, 2014; Stake, 1995), as it is this, the specific detail and experience in the case, from which we so often learn.

Then there is the issue of story. Stories are the natural way in which we learn, as Okri (1997) has said, an observation worth following if we want people to take notice of our case study evaluations. In evaluation, story can be seen in several senses – in the underlying narrative structure of what was learned in the case (House, 1980), as data – short stories – and as a method of communication – telling what was learned in story form. Writing this story is no easy task. For the story of an evaluation to communicate, it needs to have a strong narrative structure, be well written and grounded in the realities of the case. It also needs to demonstrate the worth of the programme or policy, and be sensitively portrayed, especially when key protagonists and volatile socio-political contexts may be identified.

This last feature highlights the ethical dimension of case study work and the necessity for strong ethical protocols to guide collection, analysis and reporting that are endorsed by all stakeholders. In evaluation, such protocols need to include, beyond the usual ethical procedures for protection of persons, those that address the political dimension of evaluation – that is, protocols that do not privilege any one interest group or allow anyone to dominate, and that ensure that all relevant interests and perspectives are represented, especially those of the least powerful in the case and the most disadvantaged in society.

Strongly related to the story of the case is the potential in case study evaluation for narratives of key protagonists. Here I do not mean only those who are implementing the programme or policy, important though these are, given that it is people who interpret and enact policies and programmes in practice (Kushner, 2000; MacDonald, 1977). Equally important are narratives of those who generate the policy or programme and those who commission the evaluation. However, such portrayals are rarely seen in case study evaluations.

Narratives of evaluators are also less common, and perhaps rightly so. It is important to acknowledge the values and perspectives of the evaluator and that s/he is an inevitable part of the frame. However, in case study evaluation a balance has to be struck between the boundaries of the case and those of the self: what is and what is not legitimate and appropriate to explore and to share in a case study of a publicly funded programme or policy.

On the question of reporting, case study evaluation has huge potential for communicating in ways that match how people learn, to promote the likelihood that they will engage with the findings. I have already mentioned stories and narratives of the case and within a case study narrative, there can be closely observed episodes, critical incidents, dialogues and cameos of individuals. Depending upon the ethical clearance possible, all kinds of visual forms such as photographs, video diaries or video clips of the story of the case can enhance access and understanding, though here too the narrative structure of the case needs to be preserved. And, given our digital age, there is massive scope for presenting complex quantitative and qualitative evidence from case study evaluation embedded in context in a few slides or a short CD. Long written reports, so often criticised as a problem for case study, are no longer a practical objection.

In recent years, there has been a growing interest, albeit one perhaps not yet reaching many people, in engaging with a variety of artistic forms – poetry, drama, collage, drawing – in the gathering and analysis of data and in reporting findings (Liamputtong & Rumbold, 2008; Simons & McCormack, 2007). Easier to adopt in reporting than in analysis, these forms have been utilised more often in evaluations in professional practice contexts of education and health care than in policy environments. Yet they are potentially relevant in policy contexts too for communicating and enhancing understanding of the case. It may always be the case that the written word will prevail, but demonstrating the worth of the case artistically and creatively has much to recommend it if our audiences are prepared for this way of seeing.

Whichever angle we take to report, and I appreciate that in some policy contexts we may be obliged to report more conventionally, looking ahead there are a number of things we can do to persuade our audiences of the value of case study evaluation. First, we need good examples of case studies that capture the intricacies of the case and demonstrate, with ‘thick description’ and closely observed incidents and dialogue, the reality of what transpires in the field. Second, we need to see narratives of people in the case that document how it is for them and what issues they think are important. Third, we need to find ways of portraying the different and interweaving contexts in the case at different levels to show the complexity of programme and policy implementation. Finally, as indicated above, we need more imaginative ways of reporting what we learn from cases which match and challenge the ‘vocabulary of action’ (House, 1973) of policymakers, practitioners and citizens.

It is now fifty years since contemporary evaluation was recognised as a legitimate field of study, and the justification for case study evaluation is clearly established. However, it is not yet mainstream and, as I indicated at the beginning, it is in danger of being overshadowed by methodologies that promise greater ‘certainty’. What case study evaluation does (and this I regard as a strength) is to challenge that certainty, to open up possibilities for understanding in different ways. It gives agency to those in positions of responsibility to engage with the issues in the case to inform actions, improve practice and develop policy. As I have argued before: ‘To live with ambiguity, to challenge certainty, to creatively encounter, is to arrive, eventually, at “seeing” anew’ (Simons, 1996, p. 38).

This is the power and promise of case study evaluation. It is a challenge to traditional ways of evaluating social, health and educational programmes and policies, and we may not yet have fulfilled that promise. More examples are needed that portray the reality of the programmes we evaluate – the interface of people and politics – that clearly establish the value of the programme or policy, and that communicate in ways our audiences can readily apprehend. It is a huge challenge. But if we are to realise two of the major criteria of evaluation, those of utility and credibility, and to persuade people to act on the findings, it is a challenge worth taking.

References

Cronbach, L. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219-245.

Greenhalgh, T., & Worrall, J. G. (1997). From EBM to CSM: The evolution of context-sensitive medicine. Journal of Evaluation in Clinical Practice, 3(2), 105-108.

House, E. R. (1973). The conscience of educational evaluation. In E. R. House (Ed.), School evaluation: The politics and process (pp. 125-135). Berkeley, CA: McCutchan.

House, E. R. (1980). Evaluating with validity. Beverly Hills, CA: Sage.

Kushner, S. (2000). Personalizing evaluation. London: Sage.

Liamputtong, P., & Rumbold, J. (Eds.). (2008). Knowing differently: Arts-based and collaborative research methods. New York, NY: Nova Science Publishers.

MacDonald, B. (1976). Evaluation and the control of education. In D. Tawney (Ed.), Curriculum evaluation today: Trends and implications (Schools Council Research Studies, pp. 125-136). London: Macmillan.

MacDonald, B. (1977). The portrayal of persons as evaluation data. In N. Norris (Ed.), Safari 2: Theory in practice (Vol. 4, Occasional Publications, pp. 50-67). Norwich: University of East Anglia, Centre for Applied Research in Education.

Okri, B. (1997). A way of being free. London: Phoenix.

Shaw, I., & Gould, N. (2001). Qualitative research in social work: Context and method. London: Sage.

Simons, H. (1971). Innovation and the case study of schools. Cambridge Journal of Education, 3, 118-123.

Simons, H. (Ed.). (1980). Towards a science of the singular: Essays about case study in educational research and evaluation (Occasional Papers No. 10). Norwich, UK: Centre for Applied Research in Education, University of East Anglia.

Simons, H. (1987). Getting to know schools in a democracy: The politics and process of evaluation. Lewes: The Falmer Press.

Simons, H. (1996). The paradox of case study. Cambridge Journal of Education, 26(2), 225-240.

Simons, H. (2009). Case study research in practice. London: Sage.

Simons, H. (2014). Case study research: In-depth understanding in context. In P. Leavy (Ed.), The Oxford handbook of qualitative research. New York, NY: Oxford University Press.

Simons, H., Kushner, S., Jones, K., & James, D. (2003). From evidence-based practice to practice-based evidence: The idea of situated generalization. Research Papers in Education: Policy and Practice, 18(4), 347-364.

Simons, H., & McCormack, B. (2007). Integrating arts-based inquiry in evaluation methodology. Qualitative Inquiry, 13(2), 292-311.

Stake, R. E. (1978). The case study method in social inquiry. Educational Researcher, 7, 5-8.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Zucker, D. M. (2001, June). Using case study methodology in nursing research. The Qualitative Report, 6(2). Retrieved from http://www.nova.edu/ssss/QR/QR6-2/zucker.html