Exploring the potential of AI to help address health inequalities Interview with Adam Steventon and Brhmie Balaram

25 March 2021


Our new Artificial Intelligence (AI) and Racial and Ethnic Inequalities in Health and Care research call supports the advancement of AI-driven technologies in health and care to improve health outcomes for minority ethnic populations in the UK.  

The £1.55 million call is jointly funded by NHSX’s NHS AI Lab and the Health Foundation and enabled by the NIHR. It will fund projects across two categories of research: understanding and enabling the opportunities to use AI to address inequalities, and optimising datasets and improving AI development, testing and deployment. 

In this article, Brhmie Balaram, Head of AI Research and Ethics at NHSX, and Adam Steventon, Director of Data Analytics at the Health Foundation, discuss the programme and its aims.

Why this work is needed 

Brhmie: We first spoke about this partnership in July 2020. For context, at the time, COVID-19 was disproportionately impacting minority ethnic communities and it wasn’t clear why, but wider social determinants of health were being highlighted.  

At the same time, Black Lives Matter was making the headlines. Racial justice was at the forefront of our minds and that really influenced our thinking about the potential implications of AI-driven technologies and the work we wanted to support. 

Adam: This partnership also built on the Data analytics for better health strategy that the Health Foundation launched in January 2020. Health care is lagging behind other sectors in its use of AI, but we think there is tremendous potential, for example in how it can help to diagnose conditions and improve the quality of care. But using AI also brings an element of risk, as there are instances where relying on these technologies has adversely affected minority ethnic communities.

Brhmie: One example is a study by Obermeyer and colleagues that looked at an algorithm used to allocate resources for follow-up health care in the US. The algorithm used the amount of money spent on patients’ care as a proxy for how much care they needed, even though spending doesn’t necessarily reflect need. Because less tends to be spent on the care of black patients, the algorithm was disadvantaging them and directing resources towards white patients instead.

There have also been cases where algorithms are less accurate for ethnic minority patients. For example, dermatological algorithms trained mainly on images of patients with fair skin tones are less accurate at identifying malignant skin lesions on darker skin.
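
As a purely illustrative sketch (the labels, predictions and group names below are invented, not drawn from any real study or project funded by the call), one simple way to surface this kind of gap is to compare a model’s sensitivity across patient groups:

```python
# Illustrative sketch only: hypothetical data, not from the research call.
# It compares a classifier's sensitivity (true positive rate) across groups,
# the kind of accuracy gap described above for dermatology algorithms.
import numpy as np

def sensitivity(y_true, y_pred):
    """Of the truly malignant cases, what share did the model flag?"""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

# Hypothetical labels (1 = malignant lesion), model predictions, and a
# group label for each patient (e.g. a skin-tone category).
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0, 0, 0])
group  = np.array(["lighter", "lighter", "lighter", "lighter", "lighter",
                   "darker", "darker", "darker", "darker", "darker"])

for g in np.unique(group):
    mask = group == g
    print(f"{g}: sensitivity = {sensitivity(y_true[mask], y_pred[mask]):.2f}")
# A large gap between groups is a warning sign that the training data or
# the model may be under-serving one group.
```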

Adam: The potential for inequalities or disparities in racial and ethnic minority health outcomes is something that is often touched on in quite vague terms, without saying what those disparities are. This work is really an opportunity to get specific about what sort of health inequalities AI could potentially exacerbate and how, as well as working out how to mitigate that risk.  

On the flip side, we’re also saying the technology has a lot of potential, so how do we leverage it to close the gaps?

A two-pronged approach 

Brhmie: The research call comprises two categories. The first category looks at how to account for the health needs of minority ethnic communities when applying AI. The second category is more about how to improve this technology and how it is integrated into clinical pathways.

Adam: The first category of the call is ‘Understanding and enabling the opportunities to use AI to address inequalities’. This is about the specific issues facing minority ethnic communities and understanding if there’s potential to use AI to mitigate problems. It’s also about working with communities to understand what technology they’d like to see and what might help them to achieve better health outcomes.  

Brhmie: The second category of the call is ‘Optimising datasets, and improving AI development, testing and deployment’, and it’s looking at aspects like how we design AI and how we can improve the datasets underpinning the technology and their usage.  

This might include interdisciplinary research to understand what data to use when developing AI or computational techniques that, for example, might help identify possible sources of racial bias in AI.  
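
As a purely illustrative sketch of what such a computational check could look like, the Python below uses invented data, group labels and a hypothetical threshold; none of it comes from the research call or any real dataset. It mimics the proxy problem Brhmie describes above, where a score trained on spending rather than need can make an under-funded group look “low risk”.

```python
# Illustrative sketch only (hypothetical data and thresholds): a simple
# audit for proxy bias. If a risk score is trained to predict spending
# rather than need, a group whose care has historically been under-funded
# can look "low risk" despite having the same health needs.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)

# Hypothetical ground truth: health need (e.g. count of chronic conditions)
# is distributed the same way in both groups...
need = rng.poisson(3, size=n)

# ...but the cost-based risk score under-states need for group B, mimicking
# lower historical spending on that group's care.
score = need + rng.normal(0, 0.5, size=n) - np.where(group == "B", 1.0, 0.0)

# Audit: among patients the algorithm would prioritise (top 25% of scores),
# compare prioritisation rates and underlying need by group. With an
# unbiased score these should match.
threshold = np.quantile(score, 0.75)
flagged = score >= threshold
for g in ["A", "B"]:
    share = (flagged & (group == g)).sum() / (group == g).sum()
    print(f"group {g}: {share:.1%} prioritised, "
          f"mean need = {need[group == g].mean():.2f}")
# Equal underlying need but unequal prioritisation rates points to the
# proxy label (cost), not the patients, as the source of the disparity.
```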

When it comes to deployment, we want to encourage the development of resources that could inform best practice, guidance, and evaluation frameworks for AI with racial equity in mind. 

Making a positive difference 

Adam: Brhmie, you and your team are in a unique position to champion responsible, ethical AI across the system. We’re hoping NHSX can throw its weight behind findings that emerge as part of this programme, and also help to develop the market that will solve the issues highlighted in the research call. I think the action that will come from this is exciting. 

Brhmie: What you’re articulating about the potential impacts is important. We have said that the projects could run for between 12 and 24 months, and at NHSX, we’ve allocated time at the end of the projects to integrate the findings from the research into the Lab’s work, where possible, to ensure the safe, ethical and effective adoption of AI in health and care.

Adam: Research in this area is often about addressing the risks that technology poses to minority ethnic populations. When we were developing this call, we both thought there was potential for AI to be used for good and to reduce health inequalities. As well as addressing the negatives, it’s important to look at where AI could make a real difference to people’s lives and to ensure that everyone benefits. We’re hoping this call will bring in good ideas from researchers and minority ethnic communities: examples of where AI can make a genuinely positive difference to people’s health care, whether by improving detection and screening, improving quality of care, or helping us understand some of the root causes of poor outcomes.

Brhmie: Another thing to keep in mind is that when we talk about these problems, people often only focus on the biases in datasets. But this call is trying to widen people’s understanding of what the problems are across the pipeline for AI development and deployment. We want to identify where and how it’s possible to intervene and enact change to benefit marginalised communities.  

This content originally featured in our email newsletter, which explores perspectives and expert opinion on a different health or health care topic each month.
