AI in health care: hope or hype? With Professor Sir John Bell and Dr Axel Heitmueller
Episode 30 of the Health Foundation podcast

Episode 30 | 24 March 2023 | 34 mins


News of artificial intelligence (AI) is everywhere. We seem to be on the cusp of a revolution in how the latest AI models will change our lives – and health and care could be at the centre of those changes. 

AI will transform medicine, AI will allow doctorless screening and personalised prevention, AI will boost productivity, AI will make thousands of jobs redundant – so go all the claims. 

But is this hype or real hope? How will AI transform health and care services and the experiences of staff and patients? What’s been the progress so far? And how best to move forward safely? And with growing demand, staff shortages and a public spending squeeze, could AI be a key answer to sustaining the NHS itself? 

To discuss, our chief executive Dr Jennifer Dixon is joined by: 

  • Professor Sir John Bell, Regius Professor of Medicine at the University of Oxford and an adviser to the government on life sciences strategy, and to Sir Patrick Vallance’s current review of how to regulate emerging technologies. 

  • Dr Axel Heitmueller, Managing Director of Imperial College Health Partners. Axel has also worked as a senior analyst in the Cabinet Office and Number 10 Downing Street.

Help us improve the podcast

Please email us if you have any feedback about the podcast.

Jennifer Dixon:

News of artificial intelligence is everywhere, not least with the release of the chatbot from OpenAI called ChatGPT last November. We seem to be on the cusp of a new revolution in how AI coupled with other technologies will completely change our lives. One huge area AI can be used in is health and care. AI will transform medicine. AI will allow doctorless screening and personalised prevention. AI will boost productivity. AI will make thousands of jobs redundant, so go all the claims, but is this hype or real? How will AI transform our health and care and the experiences of staff and patients? What's been the progress so far and how best to move forward safely and with growing demand, staff shortages and a public spending squeeze, could AI be a key answer to sustaining the NHS itself?

I was tempted to ask ChatGPT these questions, but instead I'm sticking with analogue, which is much more fun. So with me today to discuss all this, I'm delighted to welcome two humans. Professor Sir John Bell, who is Regius Professor of Medicine at the University of Oxford. John is well known but amongst many other things, he's a central advisor to the government on its life sciences strategy and he's also an advisor on Sir Patrick Vallance's current review of how best to regulate emerging technologies. And Dr. Axel Heitmueller who's Managing Director of Imperial College Health Partners, where he's been since it started in 2011. Now Axel has long experience and an interest in innovation and, amongst other things, he's also worked as a senior analyst in the Cabinet Office and Number 10. Welcome both. Can you paint a picture both of you, of what you think AI could do for us in health and care, a realistic vision, what's already possible and what might be the case in the next decade? Maybe starting off with John.

John Bell:

I think AI is going to be one of the great disruptors in health care and, like all disruptors, it takes a while for it to actually show that it's useful, safe, and effective. I've got a thing called the 20-year rule, which is that it takes 20 years for these things to actually truly bite. And that's how long it took for monoclonal antibodies, and that's how long it took for genetics, and it'll take about that long for AI. But just to be clear, we've been at AI for more than a decade. So we're closing in on a situation where it's going to be probably used pretty substantially across a whole variety of health care domains.

And there are two areas where I think it's going to have the biggest impact. One is shifting demography, an epidemic of chronic disease, huge pressure on health care systems all over the world, and we're short of manpower. And I think we're not that far away from getting answers to a whole range of simple questions that would normally come from a health care professional provided instead to people through large language models like ChatGPT and others, which would actually give us a pretty accurate view of what our symptoms might be related to and what we might need to do after that.

So that's going to be important in Western health care systems. As you know, the biggest challenge in the NHS at the moment is workforce. I've always had the argument that rather than training another 2,000 medical students every year, it might be better to get our IT systems up to speed because you could potentially reduce the burden of health care activity on the workforce by making the whole system a great deal more IT and tech friendly and that's one way that'll work. Now the other important space for this, and we've been talking quite a bit about this in the global health community, is that if you think we've got a workforce shortage, you just need to go to Africa. And I can tell you they're really, really short of health care workforce and these tools could prove to be immensely valuable in getting some kind of even rudimentary health care system to the majority of the population and that could be very exciting.

So that's one domain. Then the other domain in health care systems is taking very structured data and, as a result of machine learning technologies, unsupervised machine learning, you may be able to get to better diagnoses from imaging, from genomics, from structured digital data. And combining all those things together will make it even more powerful, which will give you much better diagnostic accuracy and will create a new diagnostic framework which will be done without a huge amount of input, again, from the workforce. So those methodologies are going on now and they look very powerful. So that's a second domain which I think is going to have a major impact.

Jennifer Dixon:

Just to press that for a minute. So what you're saying is a huge shift towards almost self-diagnosis and screening and risk factor management by patients, is that right?

John Bell:

Yeah, well of course we do this already because we've got NHS 111, people call up all the time and they get some advice, but the advice will not be as good as what you get from a decent large language model processor. So that's just going to improve the quality of the information that we get. But then you can take it further beyond the conventional episodic phone call to 111, and there could be a system whereby you're in continuous dialogue with a large language model processor that helps you make decisions about your lifestyle, about what you should do and what you shouldn't do, what your risks are. We don't have a structure for deploying public health interventions now in the UK; it doesn't exist. And so it's hard to imagine us building that up from scratch given the fact that there's no money in the pot. So I think we may need to use much more AI technology to help us do that.

Jennifer Dixon:

And Axel, what would you say is the realistic vision? Where is AI really going to bite into our experience in health and health care over the next 10 years?

Axel Heitmueller:

Yeah, I would broadly agree that the two big areas are how it democratises knowledge and access to health care, whether it's in developed countries or developing countries, and also personalises health care. So the one-size-fits-all approach that we are mostly taking to treatment and care will probably become one that is much more tailored towards our individual needs, that can pick us up where we are and that can give us individualised care plans and interventions that we are only dreaming of now. I think I would add two other areas which are probably relevant. So I'm always intrigued that in health care, unlike other industries, we seem to start with the most complex aspect, which is the human-to-human interaction, diagnosis, symptom checkers and so on. This is not really where other industries have started. Other industries have started with the mundane back office, the automation, the drive to be more efficient. Health care hasn't started there and there's so much low-hanging fruit.

We don't have bed management systems, we haven't barcoded any of our equipment. We're really poor at procurement and inventory management. That's probably where a lot of other industries would've started, and my hope is that is developing, or will be developing, over the next few years more rapidly than it has. I suppose the other area, and John knows a lot more about this than I do, is the whole drug development space, where AI actually has started to really make a big difference. In health care we're still waiting for AI to make a big difference beyond imaging and diagnostics, where it actually is starting to make an inroad, but in the pharmaceutical industry and on [inaudible 00:07:38] AI is starting to really make a big difference, and maybe it is that space where we will see the fastest pace in the deployment of AI.

Jennifer Dixon:

You mean speeding up drug discovery and vaccines?

Axel Heitmueller:

Absolutely. I mean the speed by which we can now basically check for different combinations of molecules and so on is just incredible compared to the mostly manual way in which we used to do that.

Jennifer Dixon:

So just on a couple of things there. The first one is this thing you were talking about, Axel, which is AI lifting away some of the boring processes that are currently taking place. And I don't know if you saw Eric Topol's Deep Medicine, and he's very tough on the decay of time, the decay of medicine over the last three years, the erosion of medicine, he says, because of all the time spent by docs on keyboards and screens, which has just squeezed the caring time and dehumanised it, et cetera, et cetera. And he was particularly talking about these systems that can abstract information simply from the conversation that the doctor's having with the patient and write that up into notes that could be recorded. And he thinks that's one of the biggest areas where medicine will be transformed. I don't know whether you agree with him on that.

Axel Heitmueller:

Yeah, I think that is probably one area and the intriguing bit there is that I think in this near term it's not about replacing doctors or nurses or anyone, it is working in tandem with them. So almost having someone looking over your shoulder. And that's important for two reasons. So one is I think there's a perception that humans are really good at this stuff and they don't make mistakes. But as we know actually medicine is full of mistakes and doctors and care professionals make mistakes every day and so do actually machines. But the evidence at the moment is if you combine them you make way fewer mistakes. And so what skills does it actually take to have a human and an AI system work together in tandem is probably something that should worry medical schools and others because it is not something that we should take for granted.

That's definitely one aspect. The other one is that obviously knowledge in medicine is decaying quite quickly as well, so the doubling time of knowledge is accelerating. It's hard to get to the exact figures. I think there's a paper from 2011 that everyone quotes, whereby medical knowledge should by now double every 73 days. I'm not quite sure that's where we are, but certainly it is doubling very quickly, and in that world it is important that you have access to all this knowledge in a very efficient way. And that's obviously where some of these AI systems come in, and where we have seen even in the last four months, people are now playing with ChatGPT and Bing and others. They're not very good yet but they're getting there. And I think having someone behind you, looking over your shoulder, making those decisions will probably be a game changer. And as John said, eventually that will then democratise health care in a way that I can do this by myself at home with some confidence.
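
Taken at face value, the widely quoted 73-day figure would mean medical knowledge multiplying roughly 32-fold in a single year, which gives a sense of the scale Axel is questioning:

\[ 2^{365/73} = 2^{5} = 32 \]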

Jennifer Dixon:

Yes. And also give more time back to the docs who are currently doing stuff on keyboards, who could actually then spend more time with patients as well as making better decisions with them.

John Bell:

I agree with everything that Axel has said, but some of this is not very clever. The problem with the IT systems that we currently have in health care is that they were designed 20 years ago and they're really clunky and you don't need any AI, you just need a system that works and we all spend all day every day on our phones getting all kinds of information quickly, efficiently, communicating with each other, doing all that stuff. We've got a generation of now young people, everybody under the age of 40 has lived in that world. They then go and work in a hospital, they end up with a computer terminal that takes five minutes to boot up, it produces a green screen like we had 30 years ago. It's unbelievably slow and clunky and access to health care data, that generation of prescriptions, all that stuff just takes forever and that's why people are fed up.

I think nobody would have a problem about IT enablement of what you're doing because we've seen it in every other walk of life and it just makes things faster, more efficient and slick. The problem is that we've invested in a set of systems that do exactly the opposite. And you are quite right Jennifer, to point it out because when I talk to junior staff and medical students and young doctors, the thing that frustrates them most is that simple things will take 20 minutes on a terminal where the quality of the processing and the quality of the tech function is terrible. So forget AI, just get a bit of tech that works in my view.

Jennifer Dixon:

Get the system to work, exactly. And the other thing I was going to pick up with you John, is okay, this great new future where we're all self-diagnosing and having risk factor management and access to the best research in the world on our phones and things. Two things there: one is, could that not unleash a whole set of demands that we can't meet? And secondly, doesn't that appeal to the worried well and the educated, when a lot of ill health is concentrated in groups in society who would not respond so well to that kind of information? What do you say to those points?

John Bell:

I think that's not a feature really of AI, it's a feature of health care generally, isn't it? And that is relatively fit, healthy, middle class people first of all tend to be more hypochondriacal and they also tend to seek out health care much more effectively and efficiently than people from deprived or diverse backgrounds. It's possible AI will help with that. I think that's untested. But one of the interesting things is if you make it easy to access and you make people interested in the problem, and I think you can make people from all strata of society interested in their health, you can make it accessible in a way that you can't currently make it accessible. So as you know, there's quite good data that some substantial part of the inequalities of health is that people from deprived backgrounds just don't get to the hospital, they just don't get to the GP, they just don't seek out health care.

If you made that easier for them, they might actually do more of that. And I think a good example of that of course is the ability of African countries to completely leapfrog the usual land-based communication stuff with the rapid advances in mobile telephony that have really transformed lots of aspects of African society. Now that's mostly finance related at the moment, but if you jump on that stuff and use it to manage your bank account and extract money and do all that stuff, it's not that big a jump to start to use it for other things like health care. But I'm quite prepared to say there's no data so I can't tell you for sure that's the future but it is one of the possible outcomes. Doesn't all have to be bad.

Axel Heitmueller:

Can I add a few points to this? So where there is a difference in AI is obviously that we train these algorithms on existing data. There's an example in the States where they looked at or they tried to predict basically what resources should be used by patients coming into a hospital. And it turns out that the algorithm assigned much higher values to White people than Black people. And the reason for that was that basically the algorithm trained on the predicted health care cost and because White people were using services much more extensively than Black people, you basically introduced into the algorithm quite a lot of bias. And so therefore what John said about we have that inequality already actually does matter for how well we can develop some of these tools going forward. And I think the other aspect is that a lot of this will hopefully be used also for public health.

Again, John alluded to it, we have to get into the prevention agenda, and the problem there is that effectively we look at the population level rather than individuals, and there's a risk that because of that lens inequity will actually get worse rather than better, because we're not basing it on individual need but on population-based needs. And again, that is often a reflection of the inequalities that are already baked into the system. So I do think we need to be really careful about the way we develop some of these algorithms to counter the inequality, but I agree this is not a reason not to use them, it's just we need to pay a lot more attention to this.
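
To make the mechanism Axel describes concrete, here is a minimal, hypothetical simulation of what happens when health care cost is used as the training label in place of true need. The group labels, numbers and the 'top 10 per cent' rule are invented for illustration and are not taken from the US study he mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical underlying need, but group B historically uses
# (and is billed for) less care at the same level of need.
need = rng.normal(size=n)                        # true clinical need (unobserved)
group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
access = np.where(group == 0, 1.0, 0.6)          # group B accesses services less
cost = need * access + rng.normal(scale=0.5, size=n)   # spend, used as the training label

# A model trained to predict cost will rank group B lower at equal need.
# Here we use observed cost directly as the "prediction" to show the effect:
high_risk_cut = np.quantile(cost, 0.9)           # top 10% get extra resources
flagged = cost >= high_risk_cut

for g, name in [(0, "group A"), (1, "group B")]:
    equally_sick = (group == g) & (need > 1.0)
    print(name, "share of equally sick patients flagged:",
          round(flagged[equally_sick].mean(), 2))
```

With these invented numbers, equally sick patients in the lower-access group are flagged noticeably less often, purely because of the proxy label.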

John Bell:

I think that's the key issue Axel, isn't it? And that is I think we're all aware of both bias and drift in AI activity. First of all, they're both significant issues that we need to track. The crucial thing is to be always aware of it and continue to cross-check and generate the quality control systems to make sure that you're not, A, starting out with a biased algorithm, but also that it doesn't drift in ways that you don't want it to drift. So this isn't just uploading a programme and letting it go. You've got to monitor it continually for these key things like inequalities of health and disparities of access, but also disparities in the type of information that comes out of it. And we need a group of people who are thinking all the time about the ethics of these approaches because that's crucial.
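
A minimal sketch of the kind of ongoing monitoring John describes, assuming a deployed model's outputs are logged with a group label and a time period. The column names and the 5 per cent tolerance are illustrative assumptions rather than any real system's settings.

```python
import pandas as pd

def audit_model_outputs(log: pd.DataFrame, baseline_rate: float,
                        tolerance: float = 0.05) -> list:
    """Flag weeks where the share of patients the model marks as high risk
    drifts from the expected baseline, or differs noticeably between groups.
    Expects columns: 'week', 'group', 'high_risk' (bool); all illustrative."""
    alerts = []
    for week, chunk in log.groupby("week"):
        overall = chunk["high_risk"].mean()
        if abs(overall - baseline_rate) > tolerance:               # drift over time
            alerts.append((week, "overall", round(overall, 3)))
        for grp, sub in chunk.groupby("group"):
            if abs(sub["high_risk"].mean() - overall) > tolerance:  # disparity between groups
                alerts.append((week, grp, round(sub["high_risk"].mean(), 3)))
    return alerts
```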

Jennifer Dixon:

And of course it could be, as time goes on, that we have more applications of AI to the kinds of things Axel was talking about, public health, which is non-communicable disease for example. We might be able, through conjoining data sets, to have a much clearer idea about the commercial determinants of health, particularly in relation to obesity, and to see patterns we hadn't seen before. So that is all possible. I'm just going to turn very quickly to a niche question. Can you just categorise what AI is? There are different sorts, aren't there? Like narrow AI, general AI, machine learning, deep learning.

Axel Heitmueller:

I saw this somewhere, sort of Russian dolls. So there is AI, which basically tries to mimic human intelligence, and that's the broadest term in that space. And then you have machine learning sitting underneath, which makes predictions but is based fundamentally on labelled and structured data, and that still requires a lot of manual input. You can train an algorithm to detect cats and dogs based on photos, but you have to label those photos. Then you get a level down and we come to neural networks, and they start to really mimic the human brain, in the sense that they have inputs and outputs and layers in between, and they can tell us something about that relationship, and that becomes suddenly a lot more sophisticated. The last level down is deep learning, which is probably the most sophisticated way in which we look at data at the moment, and that can then indeed use unstructured data.

But I guess at the moment the way that AI has been democratised has basically happened through generative AI, which is ChatGPT and Bing and other things. And the difference there is that it not just recognises cats and dogs, it can actually produce images of them, and it mimics human behaviour in a way that obviously people have found incredibly appealing over the last four months. Hence, ChatGPT has a hundred million users a month. I'm not quite sure whether they're users or whether they're actually trainers and whether we are all contributing to the development of the algorithm. That's probably why we all have access to it. But that's broadly the categorisation I would give.
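
Axel's point that classical machine learning depends on labelled, structured data can be illustrated with a toy supervised classifier in the spirit of his cats-and-dogs example. The features, numbers and use of scikit-learn here are illustrative assumptions; the key point is that every training row needs a human-supplied label before the model can learn anything.

```python
from sklearn.linear_model import LogisticRegression

# Hand-labelled, structured training data: each row is [weight_kg, ear_length_cm]
# and a person has already supplied the label for it.
features = [
    [4.0, 6.5], [5.2, 7.0], [3.8, 6.0],       # labelled "cat"
    [20.0, 10.0], [30.5, 12.0], [8.0, 9.5],   # labelled "dog"
]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = LogisticRegression().fit(features, labels)

# With this toy data the model should separate the two classes:
print(model.predict([[4.5, 6.8], [25.0, 11.0]]))  # expected: ['cat' 'dog']
```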

Jennifer Dixon:

Is there an issue here that, particularly if there's deep learning going on, whatever application is doing the deep learning, it's not doing it in a way that we would recognise because it's inside a black box? Is that going to be more of an issue over time, as humans find it difficult to understand what's inside that black box and therefore have more difficulty accepting what the results might be, particularly if they're counterintuitive? So transparency I guess is the point I'm making here.

Axel Heitmueller:

I think so. I mean, to give you an example, 15 years ago I worked in a trust and someone came and said, ‘Look, here's an Excel spreadsheet and all you need to do is basically put in 10 variables and it predicts the day of discharge.’ And it's a simple regression analysis basically, but it's completely transparent. You know what you put in. Well, we didn't deploy it, for a variety of reasons predominantly to do with incentives. And there's a question in my mind whether that has fundamentally shifted. But now someone comes in and says, ‘Look, I have this AI algorithm and that can also predict day of discharge, but I have no idea what has gone into this algorithm or how it's working.’ So there's no transparency. And if you ask internationally what one of the big barriers to the adoption of AI is, it's definitely trust. But I would argue we didn't deploy the Excel spreadsheet, and I'm not entirely sure the most fundamental reasons why we're not deploying AI are so different from that experience 15 years ago.
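
The spreadsheet Axel describes is essentially a small linear model whose workings anyone can read. Below is a minimal sketch of such a transparent day-of-discharge predictor, with the variables and data invented purely for illustration, in contrast to an opaque algorithm whose inner workings can't be inspected.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical admissions: [age, number_of_comorbidities, emergency_admission]
X = np.array([
    [72, 3, 1],
    [45, 0, 0],
    [83, 4, 1],
    [60, 1, 0],
    [55, 2, 1],
    [78, 2, 0],
])
length_of_stay_days = np.array([9.0, 2.0, 12.0, 3.0, 6.0, 7.0])

model = LinearRegression().fit(X, length_of_stay_days)

# The entire "algorithm" is a handful of readable coefficients, unlike a black box:
for name, coef in zip(["age", "comorbidities", "emergency admission"], model.coef_):
    print(f"{name}: {coef:+.2f} days")
print(f"baseline: {model.intercept_:+.2f} days")

# Predicted length of stay for a new admission:
print(model.predict(np.array([[68, 2, 1]])))
```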

Jennifer Dixon:

So let's move on to reality then, and the state of play with the application of AI in health. And we've already had some comments, haven't we, from John about how it would be nice to get computers to work inside the NHS. So we can talk about wider government strategy on life sciences and AI in a minute. But let's just talk about the rate-limiting steps inside the National Health Service. The NHS holds a lot of information and a lot of keys to the acceleration of AI in future. So what are the main limiting steps here, in both of your views? Maybe starting off with John, because you started off with the data clunkiness.

John Bell:

But we need a much more tech-savvy IT infrastructure across the NHS and we need to be able to assemble data, and you also need people to be able to get access to data, 'cause if you can't get access to data, you can't generate algorithms and you also can't come to any decent conclusions. So those are all things that I think are underway. They've been a lot slower coming than I would've hoped. Yeah, I remember talking about this 20 years ago and we haven't got there yet, but I'm hoping that some of the work that Tim Ferris and his colleagues are doing in the NHS will get us to a much more sensible system. Part of the problem here is that there's no money to do this. Hospital trusts spend maybe 1% or 2% of their budgets on IT support. Most American hospitals spend 5% or 6% of their budgets on IT activities.

We're miles behind, we're not investing. And when you look at the money that's available to transform the national system, it's lamentable, frankly. So I think that's point number one. The second issue I think is equally important, and that is this is the world that we're moving into and, although the UK is pretty good at AI, the reality is that we haven't really made any effort to make the health care professions, broadly defined, familiar with or effective at utilising these tools, or thinking about them in the way that we've just been talking about. So there seems to be a massive gap between how we educate doctors, for example, and their ability to understand, contribute to or utilise some of these interesting algorithms for better outputs. So that's a huge educational gap and I don't know how we're going to fix that.

Jennifer Dixon:

Just on that point, I asked a friend of mine, she's just finished medical school and is six months into being a junior doctor, trained at Imperial. I said, ‘Was there any mention of AI or much on it in your training?’ She said, ‘No, nothing.’

John Bell:

Of course this has happened before. It happened with genetics as well. I remember lecturing the students in the medical school here on genetics 20 years ago and half of them were asleep and they weren't interested, and there wasn't much in the curriculum. That's all now changed, but it's taken 20 years to get people alert to the fact that it could be quite a transformative technology. But the medical educationalists have something to be accountable for here, because they haven't really pushed this at all. And it's partly because those of us who are the old folks in the system, this is all new to us as well. And with the exception of people like Axel, who is obviously right at the cutting edge of this, we know it's coming, but it would be very difficult for us to teach this. What you really need is people who know AI algorithms to actually help the kids in the medical professions to understand how it works and how they can use it. If you could fix those two things, you might have a chance of really making this work.

Jennifer Dixon:

Axel, what would be your main priorities here?

Axel Heitmueller:

I completely agree with John, and I suppose I'd add a few more. One question I have is: they don't get taught AI, but is there much on analytics generally? My sense is there isn't. So there's an acute lack of curiosity about how, frankly, health care works and what data could offer us. It is a very reactive system, and in a very reactive system this is a tool that's not really needed, to be perfectly blunt, because it could be used in a much more proactive way to shape health care. And I think, philosophically, that's not where sickness care is, and in many ways that's why it's so exciting to deploy some of this stuff in the prevention space. Just to add to the list that John just provided, I do think there is something about the incentives generally in the system, and that's partly reimbursement, but it's also partly whether we care enough about outcomes and improving actually the quality and safety of care.

I would argue we're not, because if we were then the indicators internationally wouldn't be quite as bleak as they are. Then there is something about regulation. So we haven't even really finished regulating digital devices, let alone AI. I mean, the FDA is a bit further on, but people are still sceptical about some of the robustness of the evidence that the FDA is using. NICE is not there yet. And I think that we have quite a lot of catching up to do in that space. And then there is also what I alluded to earlier, which is this being bilingual as a clinician. So understanding the clinical aspect but also understanding the data and the tools that you have. So this thing that looks over your shoulder, how does it work with me? That's a profound change in the way we do our work, and that is not reflected in the training.

And then maybe the final one is the trust issue. I do think that a lot of people look at this and don't really know what to make of it, because it's caught up in a sort of sci-fi space where people are a bit afraid of artificial intelligence because they don't understand it. And so what has happened over the last four months actually is quite helpful. A lot more people have had an exposure to what AI can do. Now obviously that hasn't taken away all the fears, so there's something about the public and professional engagement in this space to build that trust. And obviously big tech companies have quite a role to play in this.

Jennifer Dixon:

And that's a great list, both of you. And to which I'd also add methods of evaluation, particularly if there are large amounts of AI technology or other technologies coming into the system, to be able to evaluate these things faster. Let's just move on to overall strategy, which I know, John, you've been involved in, and I know, Axel, you've been thinking a lot about, not least with the Tony Blair Institute and their report just out. Clearly we've got a life sciences strategy. We've also got an AI strategy. We've got some of the architecture being built already, an Office for AI. We've had work on AI regulation with Patrick Vallance. We've also got a Centre for Data Ethics and Innovation. We've got a new Department for Science, Innovation and Technology.

So there's quite a lot of action here, even though I think most people are saying that we are slightly behind the curve when it comes to some other countries, not least US and China of course, but we are trying to catch up. From reading all that's written about this stuff, the architecture is beginning to be built here, but there's still some very significant issues. And one of them that comes across loud and clear is computing power and related to it, investment. Investment in discovery and investment in scale up. What's your thoughts there of the state of play? What else needs to be put into this architecture?

John Bell:

I'm a glass half full person. I think we're in a pretty good place in the sense that we've got a lot of scientists who are very good at AI, not necessarily in the health space, but across the engineering, computing science and statistics domain. I think where we have challenges is, first of all, high performance computing. So everybody's been bouncing around saying, ‘Let's do quantum, let's do quantum.’ The reality is quantum's probably still some distance off; it will inevitably get here. But the advances in high performance computing, along with AI algorithms, have been massive in the past five years and it's moving very, very quickly. So you can't really do this without significant high performance computing capabilities. And I think one of the problems with the science infrastructure budgets of the last 20 or 30 years is they haven't really focused on that, and as a result we're behind in that space.

And then with regard to how this will be applied in health care, the thing I'm most worried about is the NHS just doesn't seem to get it. They're not really interested in this and they still make access to data almost impossible, no matter whether it's anonymised or pseudonymised, and you can't do AI without data. So in our current state of affairs, we're going to be in bad shape in the health space, simply because although we have potentially the best data sets in the world, as we saw in Covid, the reality is that we've closed it all down with a massive regulatory framework that means that nobody can get at it and improve the quality of the data. So those are the two big obstacles in my view. And if we don't fix those two, then we will definitely get left behind.

Jennifer Dixon:

Yeah, excellent. What's your perspective?

Axel Heitmueller:

I suppose the question is what is different about AI compared to the general problem that we have with the adoption of innovation in the NHS? And it's a long list of barriers obviously that get in the way. Fundamentally, I think it comes down to incentives and the burning platform to use some of these things. Why change? We can just muddle on. And so I think this is not fundamentally different from a generic issue that we have in the UK, which is really that uptake is just slow for a variety of reasons. And if we crack that, then I think we stand a much better chance of deploying AI as well.

But then this is an international problem. Yes, there are some systems in the US that deploy AI because they can and they have resources, et cetera, but it's not necessarily that the whole of the US is using this or the whole of Europe is using this. This kind of dissemination of best practice and innovation is a huge issue in health care, partly because it is unlike FinTech, where a lot of this has been deployed: that's a competitive market, and if you don't do it, you suffer financially and otherwise. That is just not the case in health care. What the burning platform actually is for health to deploy some of this is unclear at this stage.

Jennifer Dixon:

Yes. And are we consigned as a small nation to be nimble on developing AI applications and discovery of them only for them to be then scaled up and sold elsewhere as we've seen with some companies, is that what we think is going to happen to us?

Axel Heitmueller:

In many ways we're not small enough is my argument, right? Because if you look at Israel, which is the place where a lot of this is happening, the pace of development over there and also the application in some of the health systems is amazing. And then you see other very digitally enabled governments like Estonia, some of the Scandinavian countries. So maybe we're just not big enough but also not small enough to get it right, we're somewhere in the middle. That might be one of the challenges.

Jennifer Dixon:

What are the three big priorities that you'd like to see done now to improve and progress things in health and health care for patient benefit?

John Bell:

I think the first and most important thing is to get the NHS tech thing right, and that's going to require some money, but it'll cost a lot less than training a whole new old-school workforce. So I think people just need to get sorted out and get that in place. That'll make a dramatic difference to the efficiency with which our health care system runs. That's number one. Number two, health care systems everywhere are going to run into the same problem, and that is there's no real demand management. They're all sickness-oriented systems focusing mostly on episodic care for severe disease. And somehow you've got to shift that to a much more prevention-oriented, early-diagnosis-oriented type of structure, using AI tools and large data sets to be really effective. So if you give me a magic wand, I would definitely go after that one. And the third one relates to this issue about training. Don't forget that although the computers are doing the AI, you've got to have people who know how to use them. And that means you've got to train a workforce that's coherent and sensible about how they work.

Jennifer Dixon:

Thanks so much. Great list. Axel.

Axel Heitmueller:

We have to sort the trust issue around data generally. We are not going to get to AI if we don't get the basic data right. We have got that wrong repeatedly, so we really need to invest in that. Secondly, I think we should get way more excited about the boring back office automation first, before we get too excited about some of the patient-facing applications. And we could probably make much more of an impact in that space, both in terms of efficiency and cost savings, but also the quality of care that comes out of it. And then I think the third is, can we address this culture of anecdote that permeates the NHS? We're allowing people not to engage with evidence and data in a systematic way. And if we correct that, then I think we create a pull for sophisticated data tools, including AI.

Jennifer Dixon:

So we must leave it there. Fascinating discussion. I hope you agree and a topic we'll truly return to in future because it's so big and important. Thanks as always to our guests, John Bell and Axel Heitmueller for their insights. For more information on the topics we discussed, please go to our show notes as ever where we've put some of the interesting reports that I referred to and some of our guests did. Next month we'll be looking at stress and how it weathers our bodies and uncovers premature ill health and what to do about it. So join us then. Meantime, thanks for listening. Thanks to Kate Addison and Leo Eubank at the Health Foundation and Paddy and team at Malt Productions. And it's goodbye until next time from me, Jennifer Dixon.

Subscribe

Subscribe to our podcast on your preferred platform to receive future episodes when they’re released.
