Hospital-wide mortality rates and measuring quality: smoke alarm not smoke screen

8 May 2013

Following the Francis Inquiry, we need to look at the roles of the Hospital Standardised Mortality Ratio (HSMR) and the Summary Hospital-level Mortality Indicator (SHMI). It may seem obvious – given that Mid Staffordshire NHS Foundation Trust had a persistently high HSMR and poor quality of care – that the debate over whether hospital-wide mortality rates are a good measure of quality is settled. It would be easy to draw the wrong conclusion.

The mainstream media talks of precise numbers of avoidable deaths as if the expected mortality calculated by the model were a fact, which it clearly is not. The developers of the models are actually careful to acknowledge their limitations but, unfortunately, producing a specific number leads others to infer precision and validity. Whilst generalising from a single case is dangerous, we can use Mid Staffordshire to illustrate some strengths and weaknesses.

We should start by saying that, had Mid Staffordshire chosen to take the high HSMR (or indeed other information that was available) as an alert and evaluated quality of care, it is likely that action to improve care would have been taken earlier. Clearly then the HSMR would have made a valuable contribution. This is similar to the role that Early Warning Scores (EWS) have in clinical practice: they warn us that there may be problems that are not otherwise apparent.

When developing alerts, sensitivity and specificity matter. To add value they need to err on the side of caution. So every patient with a high EWS will need to be assessed but not all will need intervention, and not every patient who is sick will be identified. A nurse who is concerned about a patient would not fail to ask for help if the score was low.
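To make the trade-off above concrete, sensitivity and specificity come from the standard two-by-two counts. The sketch below uses invented figures (not data from any trust) to show how a cautiously tuned alert trades false positives for fewer missed cases:

```python
# Sensitivity/specificity of an alert, from a 2x2 table of invented counts.
# An alert tuned to "err on the side of caution" accepts more false
# positives (lower specificity) in exchange for missing fewer sick
# patients (higher sensitivity).

def sensitivity(true_pos, false_neg):
    """Proportion of genuinely sick patients the alert flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of well patients the alert correctly leaves alone."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts for a cautious Early Warning Score threshold:
tp, fn = 90, 10    # 90 of 100 sick patients flagged, 10 missed
tn, fp = 700, 200  # but 200 of 900 well patients also flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.78
```

With these made-up numbers the alert catches 90% of sick patients but flags over a fifth of well ones, which is exactly why a high score demands assessment rather than proving a problem.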

Similarly with HSMR, high values should be investigated but are not proof of poor quality of care. Just as importantly, ‘normal’ or ‘low’ values must never be taken as proof that care is satisfactory or good. This would be true of any monitoring system, but particularly so when considering hospital-wide mortality, where there are many problems relating the measure to quality.

Mid Staffordshire was also an example where changes in coding markedly improved risk-adjusted mortality without necessarily improving quality – a frequent, unintended consequence of public reporting and implicit or explicit target setting. It is one reason why, when reporting changes in HSMR, it is essential to also report both the observed and expected values.
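As a sketch of why both numbers matter: HSMR is conventionally the ratio of observed to expected deaths, scaled to 100, where "expected" comes from a risk-adjustment model. The figures below are invented purely to show how richer coding (for example, recording more comorbidities) raises expected deaths and so lowers the ratio with no change at all in observed deaths:

```python
# HSMR is conventionally: observed deaths / expected deaths x 100.
# "Expected" deaths are produced by a risk-adjustment model fed by
# clinical coding, so coding changes alone can move the ratio.

def hsmr(observed, expected):
    return 100 * observed / expected

observed = 360  # invented figure; unchanged throughout

before = hsmr(observed, expected=300)  # coding as originally submitted
after = hsmr(observed, expected=400)   # richer coding raises expected deaths

print(before)  # 120.0 - apparently "high"
print(after)   # 90.0  - apparently "low", yet not one fewer patient died
```

Reporting only the ratio hides this; reporting observed and expected alongside it makes the source of any change visible.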

An unwelcome recent development is for trusts to respond to reports of poor care by quoting their ‘good’ HSMR or SHMI as evidence of good quality. The thinking behind this is just as flawed as assuming poor care from a high value. It appears that part of the problem at Mid Staffordshire was that the board assumed all was well, and then sought evidence to support that view.

It would be a tragedy if others made the same mistake – whether by over-emphasising HSMR or by ignoring it. HSMR and SHMI should be used to ask questions, not to provide answers. They may serve as a smoke alarm; they should never be a smoke screen.

Following the publication of the second Francis report, the Department of Health decided to investigate quality of care in 15 other trusts with high HSMR or SHMI. We can be confident that the team charged with this will find scope for improvement in them all, if only because that is true everywhere. Whether they will be those most in need is doubtful, but the risk is that this is how they will be portrayed.

As George Box put it, ‘all models are wrong, but some are useful’. We should add that a model useful for one purpose may not be useful for another. Hospital-wide mortality models may be useful for sounding an alarm, but they are not suitable for making either negative or positive judgments about quality of care, whether between institutions or over time.

Measuring safety is important and frustratingly difficult, but that does not justify over-simplification. It requires a measurement framework: a single measure is a fantasy.

Simon is a Quality Improvement Fellow currently spending a year at the IHI in Boston, and is Divisional Medical Director, NHS Lothian.
