
Predicting patients' future health is a compelling prospect for the NHS, allowing clinicians to deliver care more quickly to those who need it most, while preventing future harm and improving outcomes.

Recent government investment has focused on accelerating access to artificial intelligence (AI) and other technology that could improve care, and this is likely to include risk prediction tools. These tools enable health professionals to use models, scores or algorithms to predict the probability of a patient developing a condition, or the likelihood of a particular outcome, for individuals or patient groups. As an example, Google DeepMind has developed an algorithm that detects patients at risk of acute kidney injury in hospital.

Using these new technologies could bring many benefits. If developed wisely, AI could support all manner of improvements, such as earlier cancer diagnosis, earlier treatment of kidney injury, and fewer missed appointments.

But as with the introduction of any technology, there is also a risk of causing unintentional harm. The real-world data used to develop these tools can often be skewed by the existing structural and social inequalities in people’s health and access to health care. Therefore, applying these risk prediction models in practice could inadvertently make health inequalities worse.

In this blog we examine how these issues arise, and what people developing and using risk prediction tools can do to tackle them.

Making sure the NHS has the right data and people to develop and use the new technologies

The processes and calculations used in risk prediction tools are often developed using data taken from health system sources like electronic health records. This means the data was originally collected for a different purpose – for example, information about outpatient appointments is collected primarily so that the hospital can receive payment.

In this sort of dataset, some patient groups may be over- or under-represented, due to differences in the way people access health care, leading to an incomplete picture of health need. When this data is used to develop a model for predicting risk, various flaws in the data can be amplified and end up skewing the results – leading to biased decision making.

An added problem is that research and innovation using data is not spread evenly around the country. Instead it’s often clustered around hospitals and communities with an existing tech sector or university. There is a risk that the algorithm developed using datasets from these areas will not be as effective in some communities, or may not be usable in all parts of the country. 

For example, a tool to predict emergency admissions developed using data from hospitals in central London might not work well in rural Devon. To a certain extent, tools can be adapted to work better when used on different data, but this requires analytical skills and IT infrastructure currently unavailable to most NHS trusts, as well as more research. Investing to create well curated NHS datasets, and to build analytical capacity in the NHS so that existing data can be used wisely, will help mitigate this risk. 
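One simple form of the adaptation mentioned above can be sketched in code. The example below is a minimal, hypothetical illustration (not a description of any specific NHS tool): it assumes the tool is a logistic regression model and shifts only its intercept so that average predicted risk matches the local event rate, a basic technique sometimes called recalibration-in-the-large. The function name and figures are illustrative assumptions.

```python
import math


def recalibrate_intercept(base_intercept, dev_prevalence, local_prevalence):
    """Shift a logistic model's intercept so its average predicted risk
    matches the event rate in the local population (a simple,
    prevalence-only recalibration - fuller recalibration would also
    refit the model's slope on local data)."""
    logit = lambda p: math.log(p / (1 - p))
    return base_intercept + logit(local_prevalence) - logit(dev_prevalence)


# Hypothetical example: a model built where 5% of patients had an
# emergency admission, deployed somewhere the local rate is 10%.
adjusted = recalibrate_intercept(-2.0, dev_prevalence=0.05,
                                 local_prevalence=0.10)
```

Even this crude adjustment requires knowing the local event rate reliably, which is itself an analytical task – one reason the text above stresses the need for curated datasets and NHS analytical capacity.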

Ensuring that decision makers understand the intended and unintended consequences

As well as issues around bias and harms that can be unintentionally programmed into tools, risk prediction may also exacerbate some well-known challenges around prevention and screening. 

Risk prediction tools might help clinicians detect currently hard-to-diagnose conditions like dementia or ovarian cancer. If this works well, it could be good news for the patient: reducing the time taken to be referred or to receive treatment. But this would need to be balanced against the risk of false positives, where patients without the condition are incorrectly identified, resulting in unnecessary treatment, anxiety and worry for the patient and their families.
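The false-positive problem is easy to underestimate when a condition is rare, and a short worked example using Bayes' rule makes it concrete. The figures below are illustrative assumptions, not measurements of any real tool.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Share of positive predictions that are true positives (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


# Hypothetical tool: 90% sensitivity, 95% specificity, applied to a
# condition affecting 1% of the screened population.
ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(f"{ppv:.1%}")  # prints "15.4%"
```

Under these assumptions, roughly six in seven patients flagged by the tool would not have the condition – which is why the benefits of earlier detection must be weighed against the harms of false alarms.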

In some cases, risk prediction tools have increased demand for health care without clear benefits to patients. Care also needs to be taken to ensure technology is deployed in a way that reduces health inequalities. Learning from national screening programmes, modelling the benefits and the risks of new AI prediction tools and their impact on health inequalities, could help to avoid entrenching or increasing such inequalities.

The next steps are as important as the quality of the risk prediction model

Being able to predict that something is likely to happen is only the first step. There also needs to be an effective intervention to implement. 

For those patients identified as being likely to miss an outpatient appointment, or need an emergency admission, the reasons behind the risk will be complex and often difficult to act on. Risk prediction tools have long been used to identify patients who are at high risk of emergency admissions, for example, but being able to act on such data has proved more challenging. 

A recent evaluation of one tool showed it did not produce the anticipated benefits to patients and the NHS. A lesson from using these tools in practice was that although the GPs often knew who the high-risk patients were without using the tool, they didn’t have access to the right preventive community services to be able to reduce hospital admissions. 

Developing tools with clinicians and patients, and continuing to work in partnership during roll out and evaluation, will help. Done right, this would ensure that risk prediction is used at the right point in the patient pathway, and that staff find these tools help them to provide better care in their local context. Developers should also engage patients in the design of risk prediction models, ensuring that the use of data to predict risk is publicly acceptable, and that tools reflect patient needs and preferences for their care.

With growing investment in data and technology, there’s an enormous opportunity for the NHS to improve care for patients. To make the most of this, we’ll need to bring together clinicians, patients, computer scientists, analysts, improvement practitioners and management to create tools that work for everyone.

Sarah Deeny (@SarahDeeny) is Assistant Director of Data Analytics at the Health Foundation, and Emma Vestesson is a data analyst in the Improvement Analytics Unit.  

This content originally featured in our email newsletter, which explores perspectives and expert opinion on a different health or health care topic each month.
