Why the interest in data-driven tools for health?
Data-driven tools can help clinicians and patients by collecting, analysing and interpreting relevant data. They range from decision support tools that use machine learning techniques, right through to wearable devices that analyse biometric data and relay it to clinicians. Such tools are increasingly being used to address major health and care challenges, including reducing time to diagnosis, and increasing the efficiency and capacity of services. Uptake of these tools, developed by in-house analytics teams, academics and commercial organisations, has accelerated in recent years – all in the context of a health and care system under monumental pressure.
The recent news that Faculty AI’s tool to predict A&E admissions would be rolled out across more than 100 trusts (after being trialled in 11 trusts) had a mixed reception. Some expressed concern about the role of commercial organisations in NHS data projects, and reported disappointment that no easily accessible evaluation of the tool was available.
At the Health Foundation we've been following these developments with interest, motivated by our desire to see data-driven tools harnessed in ways that highlight and address health inequalities (rather than further entrench them).
We therefore brought together a group of experts from the NHS, academia and the third sector to discuss what the priorities should be when developing, deploying and evaluating data-driven technologies. This blog sets out the priorities raised during that workshop, and poses questions for those working in the health and care system to reflect upon as we accelerate into a data-driven world. These form three important themes:
Uniting around shared values
Supporting responsible innovation
Building capability for evaluation
Uniting around shared values
Workshop participants often returned to terms like equity, trust, empowerment, fairness, problem-focused, and transparency. There was a clear consensus that tools should be developed, deployed and monitored with these principles at the centre. In a landscape where different actors have differing sets of priorities, these shared intentions are vitally important.
For academics, producing research outputs and attracting funding are measures of success, whereas industry partners will be motivated by profit and marketability. The development of NHS-led tools will be focused on patient impact and local effectiveness. These mixed incentives are not necessarily incompatible; in fact, we should be seeking to mobilise these motivations around the shared values outlined above.
For example, we could invest in monitoring the different measures of success that matter to each stakeholder, and in openly sharing all learning with the rest of the system. The recent Goldacre review put forward a number of practical recommendations for adopting a more open approach to NHS analytics. We need to see more meaningful, value-led co-production between partners across all sectors, so we can create an environment that fosters trust and transparency.
Supporting responsible innovation
Innovators have benefited from efforts by funders and central agencies to support new technologies being tested and rolled out across the system. For example, the NHS AI Lab launched the AI Award, making £140m available for AI innovations led by academia, the private sector or the NHS. Importantly, this award programme commissions independent evaluations for the most mature technologies. Workshop participants felt that this has helped normalise the need for external evaluation and underscored the importance of robust evidence building.
DHSC’s Data Saves Lives policy paper commits to supporting regulators to better set out standards and guidelines. The Ada Lovelace Institute has also recently published a report demonstrating the application of an algorithmic impact assessment within the NHS for the first time. These initiatives send a clear message to the sector – responsible innovation is worth the investment.
But for those without additional funding, and the structured approach to evaluation that comes with it, questions remain about accountability and responsibility. Who should be accountable if a tool is not effective, or even harmful? Who should bear responsibility for ensuring that robust monitoring and evaluation are undertaken: the developers of the tool, the organisation that deploys it, or both? And what does shared responsibility look like in practice?
Building capability for evaluation
The NHS is under immense strain, facing an unprecedented backlog for elective care, record ambulance wait figures, a workforce crisis, and new financial targets to meet. Data-driven tools are often hailed as an answer to these challenges, but understanding which tools will provide the most effective solutions can be difficult for decision makers under pressure.
Workshop participants expressed concern that many individuals responsible for procuring these tools simply do not have the time, nor access to the appropriate information, to develop a good understanding of which technologies are available, appropriate, effective, safe and economically sound for their system. Those that have bought or developed a tool still face the crucial task of evaluating whether it is working as a solution, and what kinds of impacts (intended or unintended) it is having on care planning and delivery.
Evaluation involves defining important metrics, collecting the right information in a timely manner, using appropriate analytical methodologies, and communicating the findings in a way that helps shape ongoing use of the tool. Participants reported a scarcity of robust evaluations, attributing this both to a lack of the skills required for real-world monitoring and to issues with the way tools are designed.
Participants called on funders to promote good practice (a framework for trustworthy AI was proposed by Gardner et al in 2021), urged DHSC, NHSE and regulatory bodies to incentivise and fund robust monitoring and evaluation, and encouraged developers to consider evaluation and unintended consequences at the design stage. To capture the potential of data-driven tools we must resolve these issues: how can we swiftly build capability for real-world evaluation so that solutions to these challenges are applied safely and transparently?
Looking to the future
There is a real need for NHS analytics teams, academics and the private sector to create tools that can help address the urgent challenges the health and care system faces. To do this safely, effectively and equitably, we need to see more mutually beneficial partnerships built around shared values, greater incentivisation of responsible innovation, investment in analytical literacy for decision makers, and improved capability to monitor and evaluate new tools in the real world. Lines of code written today may well still be running many years from now. There is an imperative to develop data-driven tools in a safe, effective and equitable way that reflects the future we want for our patients, health care professionals and citizens.
If you are working to solve these questions too, we would love to hear from you. You can get in touch by emailing us.
Ellen Coughlan (@EllenCoughlan) is a programme manager in the data analytics team at the Health Foundation.