Editorial

Journal of Children's Services

ISSN: 1746-6660

Article publication date: 29 November 2013


Citation

Axford, N. and Little, M. (2013), "Editorial", Journal of Children's Services, Vol. 8 No. 4. https://doi.org/10.1108/JCS-09-2013-0031

Publisher

Emerald Group Publishing Limited


Why do we do what we do?

Article Type: Editorial. From: Journal of Children's Services, Volume 8, Issue 4.

Nick Axford and Michael Little

Why do we do what we do? It can be a sobering question to ponder. Sometimes the answer is easy: because we have to, or it makes us happy, or it accords with our values. Often, however, we act with a specific purpose in mind: we do something because we think it will achieve a desirable goal. The act may be altruistic and the goal distant. But what if we are wrong? What if we are wasting our time?

In his book Cooked: A Natural History of Transformation, Michael Pollan (2013) explores why we do what we do when we cook. He looks at what he calls the “transformative fundamentals”: cooking with fire, cooking with water, cooking with air and cooking with earth. Intriguingly, he finds that developing a better understanding of the science behind these techniques helps him to cook better.

So, why do we do what we do in children's services? Why do we train parents in parenting skills, teach children how to recognise their feelings, encourage children's participation in sport, and so on? We usually have an idea, but are we as clear as we could be, and is our argument plausible? The answers to such questions could have far-reaching consequences.

In this edition Jikkemien Vertonghen and Marc Theeboom suggest that we need to understand better the mechanisms by which martial arts produce positive outcomes. As they put it, we need to make the black box “greyer” and evolve a “white box” approach. Elsewhere, Kevin Haggerty and colleagues identify promising parenting interventions for reducing adolescent problem behaviours. Their first criterion is that the intervention must be theory based. Those based on well-supported theories, they argue, are more likely to be effective.

For us, all of this points to the value of logic models. A logic model is a representation of how an intervention is supposed to work. It describes simply and clearly why an intervention is expected to achieve desired outcomes for children and families. Sometimes it is called a “theory of change” to reflect the idea that it tracks the steps predicted to change a problem situation.

When we work with innovators who are designing services we get them to develop a logic model. We think there are good reasons for doing so. To start with, an intervention underpinned by sound logic is more likely to be successful than one that is poorly thought-out. Logic models also help communicate the rationale of an intervention to a wide audience, including the practitioners who deliver it, the children and families who receive it, and the people who commission it. Practitioners often make the same discovery as Pollan: that understanding why you are doing something helps you to do it better. Lastly, logic models help evaluators to know what to measure.

There is no set way to develop a logic model, but perhaps the most common approach focuses on risk and protective factors. At its heart is a sketch of one or more routes leading to the poor outcome. The intervention is mapped onto this, showing how its components reduce risks or boost protective factors. For example, an intervention to improve children's behaviour might include training to reduce inconsistent parenting and mentoring to provide the child with a significant adult. A simpler method involves articulating a series of positive “if-then” statements: if we deliver this activity then this objective will be met; and if this objective is met then the desired outcomes will be achieved. A further approach lists the components of a logic model in columns representing, respectively, intervention activities, risk and protective factors, and outcomes. Only then does it try to connect the dots.

We think the first approach is preferable, but it is not always the easiest, and it may not be the best place to start. Several factors affect this. One is whether the intervention in question is new or not. The risk and protective factors approach is harder to reverse engineer for interventions that have been designed in the “real world”, as opposed to the laboratory-like conditions in which scientists often work. The state of scientific knowledge in a given area should also have a bearing on the approach adopted. For instance, the “if-then” approach may work better in an area where the evidence on risk and protective factors is weak and it is difficult to chart chains of effect. Then there is the practical issue of what is achievable. In our experience, the risk and protective factor approach can overwhelm people, whereas the simple column approach offers an easier route in. It encourages innovators to be specific about intervention activities, risk and protective factors and outcomes. Once they have done this it is easier to spell out how they connect up – or, as we think of it, to “explain the arrows”.

As a logic model is being developed two critical questions need to be answered. The first is whether the hypothesised connections are plausible. For example, do the connections ring true with the children and families who will receive the intervention? Do they make sense to practitioners? Are there circumstances under which the intervention might not work? What are the likely unintended consequences? We recommend convening a group of critical friends and asking them to try to pick apart every connection. The second question is whether research evidence supports the hypothesised connections. This requires at least consulting one or more scientific experts and ideally conducting a review of the relevant literature.

The task of developing a logic model is challenging. It is easy to compile unwieldy lists of risk and protective factors and outcomes, and hard to articulate the connections between them. The best interventions have logic models that are precise and modest. In the course of developing a logic model it may also become apparent that some intervention components are not a good fit with the desired outcomes. This can be frustrating, but ultimately it is also the purpose of the exercise – to check whether the concept is coherent. The process is iterative, so pruning is to be expected.

No substitute

There is a danger that, by focusing on positive connections, a logic model appears to imply that the intervention will work for everybody. It won’t. Logic models deal with probability. Each one is a hypothesis that providing the intervention increases the chance of positive outcomes. It does not guarantee that the intervention will work. There may also be unintended consequences, some of which could be negative.

So, evidence-informed logic models are no substitute for high-quality impact evaluation. But evaluation should not stop at whether an intervention works: to understand how it works, it is also necessary to test whether the hypothesised links in the logic model actually materialise. For example, if the intervention seeks to improve child behaviour by reducing inconsistent parenting, does parenting improve, and does this contribute to improved behaviour? More such “mediator analysis” is needed in our field.

And if such an evaluation reveals that the intervention doesn’t work – what then? It doesn’t automatically mean that the logic model is at fault. It could be that the intervention wasn’t implemented properly, or that it went to the wrong children. This shows why measuring fidelity of implementation is so important. But that is another editorial.

Reference

Pollan, M. (2013), Cooked: A Natural History of Transformation, Allen Lane, London.
