Editorial

International Journal of Health Care Quality Assurance

ISSN: 0952-6862

Article publication date: 23 March 2010


Citation

Hurst, K. (2010), "Editorial", International Journal of Health Care Quality Assurance, Vol. 23 No. 3. https://doi.org/10.1108/ijhcqa.2010.06223caa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2010, Emerald Group Publishing Limited


Article Type: Editorial From: International Journal of Health Care Quality Assurance, Volume 23, Issue 3

Adapting the generic ISO 9001:2000 quality standard to healthcare, although rewarding, is not easy. For example, Matthias Helbig and colleagues’ ENT case study underlines ISO’s challenges and benefits. Those familiar with ISO 9001:2000 will immediately recognise the standard’s characteristics that the authors lay down in their article, but what may be new are the policy and practice issues emerging from ISO’s application to a hospital ear, nose and throat service. The authors thoroughly explain planning and implementing ISO’s structures, processes and outcomes, notably how they ensured leadership and proper communication – discussion that readers will find helpful. Similarly, they outline their quality manual – a major ISO certification component – which is also helpful to managers and practitioners thinking about ISO accreditation. The authors come clean about the time, cost and effort required to achieve ISO certification. Indeed, their analysis is intriguing in many respects.

We publish two important critical care unit (CrCU – intensive care, intensive therapy, coronary care or high dependency unit) continuous quality improvement (CQI) projects in this issue. First, Mahi Al Tehewy and colleagues remind us that critical care is an important but expensive health specialty. It is also one of the most bewildering and distressing healthcare experiences for patients and relatives. Fortunately, CrCU standards are high, resources are usually adequate and staff are well educated and trained. But what about patients who slip through the service quality net? Patient outliers with prolonged length of stay (LoS) and those dying unexpectedly are two important examples. Is it possible to highlight these patients and avoid potential problems? The authors develop measures that connect patient stay, mortality and illness severity in a CQI context. They use APACHE II illness severity indicators (after reviewing measurement options). Their patient dataset is impressive, although they were disappointed by how much patient information was missing from supposedly routinely collected data. They exposed their data to inferential statistical analysis and, although their findings confirm published work, there are differences that have service-quality implications. The most important outcome, however, is that outlier patients – i.e. those with indicative APACHE II scores who died – need investigating to see what went wrong and what might be done to improve patient survival.

The second CrCU article we publish in this issue is Seetharaman Hariharan and Prasanta Kumar Dey’s comprehensive CrCU CQI study, which takes a more global approach than Al Tehewy et al.’s article. The authors critically review options for evaluating CrCU performance before, intriguingly, combining cause-and-effect diagrams and logframes, thereby creating a novel and powerful service quality evaluation technique. This dual analysis not only highlights service failures and their underlying reasons, but also points to solutions. Service improvement projects easily fall out of their analyses, and the authors ensured that all stakeholders were involved in their service evaluation. Their before-and-after evaluation showed post-implementation differences favouring the projects they introduced.

In the past 20 years, patient complaints have moved from “annoyances to be dismissed” to “gems that should be treasured”. Complaints and their handling processes are powerful mechanisms for improving healthcare. Sophie Yahui Hsieh’s article aims to strengthen connections between patient complaints and CQI. Her detailed case study of patient complaint structures, processes and outcomes in one Taiwanese hospital generates fascinating insights. She resurrects the critical incident technique, now 50 years old, to explore complaint dynamics. Health service complaint types are fairly well understood; the mechanisms for handling them probably less so, and the horses-for-courses issue (matching the right complaint-handling approach to specific complaints) is probably devoid of empirical evidence. The author moves from analysing the substantive complaints to a more detailed patient-complaint pathway analysis – a more meaningful and powerful approach, it seems. Importantly, she includes long-term service quality issues that address the question “What can we learn from our complaints and how can we prevent them occurring again?” rather than just short-term complaint resolution. Sadly, however, not all staff participating in the case study took a long-term, CQI approach to patient complaints. Nevertheless, we are shown that complaint pathways vary according to patient age; service type (e.g. out- vs in-patient); complaining mode (oral, written or telephone); and staff handling characteristics. Importantly, the author explains that selecting the wrong approach for resolving complaints leads to complainant dissatisfaction; that is, different complaints need dissimilar handling procedures and supporting guidelines.

Do CQI projects make a difference? If we think they do, then what is our evidence? Is that evidence robust enough to withstand psychometric evaluation? These are crucial QA questions, and we are fortunate to publish a substantial project from Johan Thor and his Swedish colleagues that evaluated 67 QI projects over five years in one Stockholm university hospital. Readers will agree that the insights generated by the study are not only absorbing but also practically useful for CQI project managers. Each project commissioned by the hospital’s senior managers had common features: doctor-led but multi-disciplinary; patient oriented; and action-research based. The authors used mixed methods, including observing and analysing project meetings and analysing documents. While most projects met their CQI goals (the main success measure), some failed owing to:

  • lack of manager support;

  • insufficient resources;

  • poor “buy-in”;

  • variable commitment; and

  • weak measures.

One surprising outcome is that female-led projects were more successful, although the authors ask us to treat this finding cautiously owing to the female-led projects’ underlying features. A less surprising finding was that “low-hanging fruit” projects were more likely to succeed. The two vignettes – one successful and the other deemed a failure – are illustrative. We hope the authors publish the how-to-do-it manuals that emerged from the projects. The article’s dos and don’ts should be required reading for CQI project managers.

We know from studies going back to the 1960s that medication errors are a common problem, but looking at Ana Jiménez Muñoz and colleagues’ data in this issue, perhaps mistakes are more common than we realise. Their detailed audit generates alarming findings. And if we accept that medication errors lead to most patient adverse events, then their findings are crucially important for QA managers. The authors show from the literature, backed by their audit, that errors occur at all medication administration phases. Their rationale is that if medication errors are understood then they can be prevented, so the auditors observed staff during all but one (dispensing) medication phase. They note, however, that medication error definitions and metrics are not standardised in the literature, so they carefully underline exactly what they mean by a medication error – a useful step if we are to benchmark accurately. From their audit data they counted 6,460 medication administration “deviations” before classifying them by administration stage and, especially important, by their actual or potential patient impact, including prolonged hospital stays caused by more serious errors. Consequently, they had no difficulty recommending how practice can be improved, such as computerising as much of the medication administration process as possible so that an automated alert system can benefit practitioners working under heavy workloads.

Patient satisfaction questionnaires remain the most popular QA data collection method. They allow us to gather volumes of information from wide-ranging and representative samples. However, preparing and posting several hundred questionnaires is time consuming and expensive. Erica Amari and her colleagues remind us, in remarkably sobering fashion in this issue, that web-based questionnaire distribution can be completed in the time it takes to prepare and post one questionnaire. Clearly, web-based surveys are more efficient and effective, not least because data are automatically lodged in analytical software, thereby saving even more time while reducing transcription error risks. But are the data different because response dynamics are dissimilar? The authors write a fine critique of the data collection options open to QA researchers before comparing telephone and web-based methods, using similar questionnaires to collect information from parents of children undergoing paediatric day surgery. Their study design is robust, but there are limitations, which the authors report honestly and fully. Their analytical techniques are comprehensive, covering psychometric issues and, especially important, response differences between the two administration modes. Surprisingly, web-based responses were more negative, possibly owing to the anonymous and private response mode, although an order effect cannot be ruled out (telephone completion followed by online). Based on their experiences and findings, the authors favour web-based data collection methods. Their recommendations are not limited to web-based surveys – they have wider implications for QA researchers.

Keith Hurst
