
Putting performance in the spotlight

Amid the spring budget announcements was a pledge to provide £3.4bn for new technology to support an NHS ‘productivity plan’, alongside an additional £2.5bn in next year’s NHS budget for ‘day-to-day activities’, including reducing waiting times.  

This throws the spotlight onto productivity, presenting it as the panacea for stretched services and record waiting times. The focus on productivity in turn shines a light on the performance of providers as the main way we measure how ‘productive’ the NHS is.  

As we move closer to a general election, health care performance will be scrutinised as a proxy measure for the government's performance. The media will be looking for headlines and chances to report on the ‘best’ and ‘worst’ performers in what’s been described as the 'national lottery of hospital survival'.  

Giving a more rounded picture of performance

Of course, the public are interested in their local providers’ performance (when I had children, I chose my maternity provider using Dr Foster), and reducing waiting times remains a top public priority.

However, generating a national barometer of performance by comparing providers, especially using only single measures such as waiting lists, is fraught with problems. These were clearly laid out just before the last Labour government came to power. At best they only give half a story, at worst they can lead to ‘creative reporting’ or ‘gaming’.  

It’s therefore timely to shine the spotlight (especially given Labour’s idea to introduce performance league tables) on research funded by the Health Foundation, as part of our Efficiency Research Programme.  

This research developed a novel way of measuring the overall performance of mental health care providers. The work not only shows that using the right combination of measures gives us a more rounded picture, but also adds vital novel thinking to the ongoing debate of how health care performance should be measured and understood.

Creating an appropriate measure for mental health care

What’s novel about this approach is that the researchers have combined measures of both health benefits (how effective treatment was) and non-health benefits (what the person’s experience of care was like) with costs. This gives a summary measure of overall performance that is, as Goldstein and Spiegelhalter called for, ‘appropriate’, not only for regulators and policymakers, but also for patients, providers and the public.
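To make the idea concrete, here is a minimal sketch of what a summary measure of this kind looks like: a weighted composite in which health and non-health benefits raise a provider's score and costs lower it. The trust names, weights and numbers below are entirely illustrative assumptions, not the study's actual model or data.

```python
# Hypothetical sketch of a composite performance score combining
# health benefits, non-health benefits and costs.

def summary_score(health_benefit, non_health_benefit, cost,
                  w_health=0.5, w_non_health=0.3, w_cost=0.2):
    """Weighted composite: benefits raise the score, cost lowers it.
    Weights are illustrative placeholders, not the study's values."""
    return (w_health * health_benefit
            + w_non_health * non_health_benefit
            - w_cost * cost)

# Illustrative providers: (health benefit, non-health benefit, cost),
# each scaled 0-1. All figures are made up for the example.
providers = {
    "Trust A": (0.8, 0.6, 0.5),
    "Trust B": (0.7, 0.9, 0.4),
    "Trust C": (0.9, 0.3, 0.7),
}

ranking = sorted(providers,
                 key=lambda t: summary_score(*providers[t]),
                 reverse=True)
print(ranking)  # Trust B tops this toy ranking despite the lowest health score
```

Note how, in this toy example, the trust with the best health outcomes alone (Trust C) does not top the composite ranking once experience and cost are counted, which is precisely the point of combining measures.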

They’ve also tested their model in a challenging context: mental health care. Mental health trusts provide a range of services (acute and community) across multiple sites. The average length of stay for inpatients is longer than in physical health services. Mental health problems can also be long-term and fluctuating, so non-health benefits can be particularly important for people, as relationships with care providers can affect outcomes.  

National performance in mental health services does not seem to attract the same political interest as surgery often does. However, the rise in the number of people not in work (now 2.8 million), growing demand for mental health care and the resulting costs to the economy bring the activity of mental health providers into the performance spotlight.    

Developing a more meaningful and robust model

A blog doesn’t provide enough space to wax lyrical about the researchers’ detailed and multi-layered approach to creating a summary measure for mental health care performance. But, in the face of new million pound incentives for meeting performance targets alongside a lack of consensus on how best to understand performance in health care, it is worth taking a bit of time to explore why this approach offers a more meaningful and robust model.    

Firstly, the researchers generate a summary of overall provider performance, drawing on two datasets (2013–15) that enable them to include both health and non-health measures for all 54 mental health trusts: the Community Mental Health Survey (CMHS) and the Mental Health Minimum Dataset (MHMDS). They then use this data to estimate costs.  

Secondly, they ensure the measures are meaningful by drawing on previous research identifying the 10 measures of quality in mental health care that matter most to staff and people receiving care, and their relative value (interestingly, in this context waiting times were rated the least important).  

Thirdly, they ‘contextualise’ the data by ensuring all measures are adjusted for factors that could impact on performance, such as whether people had a care co-ordinator. In a final nod to the issue of the ‘black hole of statistics’, they test how ‘sensitive’ their model is and compare their rankings of providers to Care Quality Commission rankings.  

Demonstrating that who chooses matters

Their results underline the problems of performance measurement. Results change depending on what measures are included.  

Excluding non-health benefits makes a considerable difference: five of the 13 top-performing providers shift downwards in the rankings. Comparing rankings with CQC assessments suggests a good degree of convergence, although the positions of the top and bottom performers differ. For example, two trusts rated ‘Outstanding’ by the CQC sit in the middle of this study’s rankings.  

Differences with CQC rankings likely reflect different purposes. The focus on patient-centred choice of measures in this study, rather than what matters to regulators, tells us (in a bit of a tongue twister) that it’s not only that we should measure what matters, but also that it matters who chooses the measures.  

Of course, there are limitations to even this model of economics wizardry. It only looks at performance from a hospital provider perspective and reflects the preferences of certain groups. But what matters to those giving and receiving care should also matter to commissioners, policymakers and regulators.

What this study has achieved is the creation of an overall measure that should matter to all those with an interest in provider performance. In doing so, the researchers have given us a more meaningful, sensitive and robust barometer. As we head into a politically charged year, this could be used to understand differences in performance, rather than name, shame and blame, in 'a spirit of collaboration rather than confrontation' or million pound competition.  

Justine Karpusheff (@JKsheff) is Assistant Director of Research at the Health Foundation.

This content originally featured in our email newsletter, which explores perspectives and expert opinion on a different health or health care topic each month.
