
Don’t just do something (and other top tips from the UK Evaluation Society conference)

16 May 2016

Louise Thomas

I recently attended the UK Evaluation Society conference, where the theme was building a culture of evaluation. I’ve been going to this conference for a few years, but this year the energy in the room was different – there must be something about ‘culture’ that captivates and energises us evaluators.

At the end of all the presentations, panel discussions and a lot of conference coffee, I was left wondering how you would actually go about building a culture of evaluation – one where commitment to understanding whether and how an intervention ‘worked’ runs through every team, at every level of an organisation; where outcomes and data collection are planned at the outset and at every stage of delivery; and where findings lead to informed action. After all, even those who self-identify as evaluators will have different working definitions of what exactly evaluation is, depending on their methodological leaning, organisational context and how findings will be used, to name just a few influencing factors. And how do you know when, and whether, you have a culture of evaluation?

At a pan-sector conference, it was good to learn how many experiences we have in common, and how those outside health care grapple with these issues, in sectors as diverse as international development, energy and climate change, community development, transport and political science.

Here at the Health Foundation we have commissioned over thirty independent studies evaluating complex quality improvement interventions – some have been published and some are still underway. We also have an in-house data analytics team developing innovative methods designed to evaluate and accelerate change. And we have produced blogs, guides and reports that capture what we have learned, in order to support others working in the field. Yet I think there’s more we could do to build a culture of evaluation.

It will surprise no one that there’s no magic bullet for establishing this type of culture. But three organisational characteristics presented by David Fleming and Tasneem Mowjee from Itad Ltd resonated with me in particular. And I think they apply whether you’re a ‘randomista’ – love that phrase, thanks Professor Picciotto – or a realist (other evaluation approaches are available!).

Firstly, evaluation can be at its most powerful when it’s seen as supportive and framed around a learning agenda. Evaluation shouldn’t feel threatening, and yet, in my experience and from talking to others at the conference, it often does. Confusion (or, in some sectors, conflation) with monitoring or performance management doesn’t help. Yet many of us working in this field are there to help: to surface assumptions, develop shared views and support delivery, as someone with a degree of independence from the intervention or programme under investigation.

Secondly, a culture of evaluation requires people power. We need senior leaders to champion the value of evaluation, as well as staff in all parts of the organisation who are engaged in planning, supporting and using evaluations. For this to happen, people need to be familiar with, and committed to, the concepts and requirements of evaluation. We evaluators can return the favour by communicating in simple, jargon-free language and designing studies that meet organisational information needs and timeframes.

And finally, perhaps the most important thing we can do to build a culture of evaluation is to be brave and bold enough to prioritise reflection over doing. Taking time to take stock can be one of the most countercultural things we do. You only have to look at a busy commuter train carriage to see how people fill this ‘downtime’ with work, Facebook or writing ‘to do’ lists (me). And I appreciate this is a really tricky ‘sell’ at a time when the NHS is under so much pressure to act: to reduce inefficiency and maintain or even improve quality. But sometimes the most effective thing to do really is nothing.

Without the pressure to act, we can be insightful, creative and strategic, which may save resources in the long run. We can take the time to understand what worked well in past evaluations and what could be improved. We can co-create a theory of how an intervention will work, so there’s a shared view among stakeholders. We can agree meaningful evaluation questions for the end users, providers and commissioners of the intervention or service we are evaluating. And we can plan data collection and analysis appropriately, so that these questions are answered at important milestones with the best available evidence to inform decision-making.

It’s a tall order, and it no doubt takes time and resources to embed (and we haven’t even mentioned complexity!). But doesn’t it feel like a great place to start in any health or social care organisation?

At the Foundation we have been thinking about these very issues (and many more) as we develop evaluation strategies for our three key work streams – improving health service delivery, making health policymaking more effective and working towards a healthier UK population – including our approach to rapid cycle evaluation. So I’m keen to learn from others working in the field. What’s your experience of building a culture of evaluation? What worked well? And, of course, what worked less well? I’m keen not to just do something…

Louise Thomas is a Research Manager at the Health Foundation.
