Asking health and social care staff to go the extra QA mile

Keith Hurst (Independent Research and Analysis)

International Journal of Health Care Quality Assurance

ISSN: 0952-6862

Article publication date: 11 May 2015


Citation

Hurst, K. (2015), "Asking health and social care staff to go the extra QA mile", International Journal of Health Care Quality Assurance, Vol. 28 No. 4. https://doi.org/10.1108/IJHCQA-03-2015-0031

Publisher: Emerald Group Publishing Limited


Asking health and social care staff to go the extra QA mile

Article Type: Editorial From: International Journal of Health Care Quality Assurance, Volume 28, Issue 4

Waiting times are seen either as mechanisms to control demand or as an unfortunate consequence of service demand that reduces patients’ quality of life. Waiting times are also politically sensitive and can contribute to a government’s downfall; an issue the UK’s coalition government faces in May. Consequently, many healthcare services have seen significant cash injections for waiting time initiatives. But are unfocused cash injections enough? Which initiatives should be funded: dedicated clinics, additional theatres, more staff? Claudia Amar and colleagues in this issue explore a Canadian province-wide project that aims to reduce waiting time. Their approach is elegantly simple: select three case studies that depict failing, mediocre and successful projects (based on waiting time), and explore, qualitatively, the reasons for success and failure. Readers will like the way factors influencing waiting time are categorised as systemic and organisational. Some influences are predictable; e.g., product champions and leadership roles. Others will surprise our readers; e.g., a regional specialty hospital’s markedly positive influence.

Poor communication between professionals and patients remains among the commonest reasons why patients complain about healthcare. I doubt there is much more we need to learn about communication-based complaints in order to resolve and prevent them. But exploring the other end – what patients like about communicating with their professional carers (i.e. compliments) – has mileage. Elena Platonova and Richard Shewchuk in this issue segment their patients into three groups according to satisfaction with their primary care physician before testing whether doctor-patient communication affects satisfaction and judgements about physician competence. The inter-relation between these variables is complex and communication-based service improvement isn’t straightforward.

Quality assurance (QA) specialists sometimes argue that healthcare quality is so important (i.e. a life or death issue) that unidimensional QA programmes may not be enough; i.e., QA managers should employ at least two independent QA methods jointly when setting quality standards and measuring the extent to which services meet them – a technique known as triangulation. Should triangulation be compulsory, or at least recommended, and if so, what QA approaches should be combined? Are two enough? Asgar Aghaei Hashjin and colleagues in this issue explore QA triangulation in Iranian hospitals. The QA methods they describe are impressive; e.g., the Iranian Government’s mandatory and voluntary hospital accreditation approach places many developed countries in the shade. To avoid overkill, however, the authors strive to generate an optimal balance between methods. It’s clear that parallel QA activities demand too much time and too many resources in some Iranian hospitals, while staff in other hospitals aren’t paying enough attention to QA. Striking a balance is challenging.

What shelf life should QA and related definitions have? Are early twenty-first century definitions still viable? Shouldn’t we periodically review and update them as new QA issues come to light? In this issue, Paul Lillrank systematically reviews current definitions. He points out their flaws and, consequently, their misdirected influence on healthcare quality metrics in a changing healthcare world. Readers will find his conceptual framework unusual and, at the very least, illuminating; it may even force them to jettison some current QA definitions.

Clinicians rely heavily on medical laboratory staff to provide accurate test results so that correct diagnoses can be made, appropriate treatment started and the patient’s recovery monitored. Consequently, we are receiving more articles from medical laboratory specialists. However, should laboratory specialists rely on off-the-shelf QA frameworks (e.g., those developed for inpatient services), or should they first check with their customers which service issues are appropriate? Vinaysing Ramessur and colleagues take the latter approach. Their SERVQUAL-based user satisfaction questionnaire, refined by service users, allows laboratory managers to think and act specifically on most laboratory QA issues.

Articles showing how quality indicators were developed so that QA managers could initiate a quality cycle are common. Articles on selecting methods to refine and approve quality standards, on the other hand, are less common; especially in primary care. Ailís ní Riain and colleagues in this issue explain how they used the Delphi research method to refine a myriad of general practice quality standards. The care the authors took to recruit and retain stakeholders for the Delphi exercise offers several valuable lessons for any practitioner starting similar exercises. The techniques used to limit the Delphi participants’ workloads (thereby improving response rates) are particularly valuable. The team’s hard work paid dividends; a significant number of general practice staff volunteered to implement the final product and were able to identify areas needing improvement. Experts are people who make difficult tasks look easy, which epitomises the research team behind this publication.

Several QA-related methods that at least maintain quality and often lower costs are available. One approach, care pathways (CPs, or multidisciplinary care plans), is explored extensively by Alex Jingwei and Wei Yang He in this issue. First, they summarise China’s healthcare performance and the subsequent healthcare reforms intended to address the problem (in itself a fascinating account). If CPs are to work (and the authors doubt they do) then the macro and micro issues preventing CP implementation (notably perverse incentives) are a significant challenge for China’s healthcare managers. I doubt these challenges are unique to China.

Asking busy healthcare professionals to participate in service improvement work can be difficult when there are barely enough staff to meet day-to-day workloads. Even with committed staff, involvement in service improvement programmes brings problems that add to their burdens. Annelie Khatami and Kristina Rosengren describe a qualitative study that explored how staff felt about being involved in a specialist service improvement exercise, and the challenges and problems they faced. The authors’ micro, meso and macro framework provides a useful structure for thinking about and acting on service quality projects. The informants suggest useful actions that make managing service improvement projects easier.

Keith Hurst
