
AI technologies are advancing rapidly, yet when it comes to AI in health care we're still in the early stages. The prize could be big; the question is what it will take to realise the benefits.

The applications of AI in health care will be far-reaching and profound, from high-quality personalised treatment advice made instantly available, to automated systems that can cut bureaucracy, free up staff time and reduce costs.

All this is exciting and could help with some of the big challenges ahead. But what of the risks? The current emphasis among policymakers is on AI safety, but a range of other considerations will also need attention: serving the public interest, inclusion, cost, accountability, autonomy, privacy and more. And how can the NHS and social care rapidly get up to speed with all these developments?

Join our Chief Executive Jennifer Dixon on location with expert guests including: Effy Vayena (Professor of Bioethics at the Swiss Federal Institute of Technology in Zurich); Alastair Denniston (Consultant Ophthalmologist and Honorary Professor at the University of Birmingham); Ashish Jha (Dean of the School of Public Health at Brown University) and David Cutler (Professor of Applied Economics at Harvard University).

The Health Foundation (2023). What do technology and AI mean for the future of work in health care?

House of Commons Science, Innovation and Technology Select Committee (2023). The governance of artificial intelligence: interim report.

UK government (2023). UK's AI Safety Summit 2023.

Institute for Government (2023). How is the UK government approaching regulation of AI?

Financial Times (2023). How will AI be regulated?

Air Street Capital (2023). State of AI Report.

OECD. OECD Artificial Intelligence Papers.

The White House (2023). Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

Jennifer Dixon:

Hi there. I'm in Maine this month, on location at an interesting retreat on artificial intelligence and health care. I'm going to be interviewing a few guests to understand this fast-developing and exciting field. But before that, it might help to have a bit of basic background on artificial intelligence. What do we mean by it? Well, one of the simplest definitions I've found is: ‘The use of computers for automated decision-making to perform tasks that normally require human intelligence.’

So why is all this so significant? Well, in health care the applications can and will be very far-reaching and profound: from high-quality, personalised diagnostic and treatment advice instantly available on your phone, backed by the latest global research, to automated administrative systems that can cut through bureaucracy and reduce costs. So all this could help us with some big challenges we've talked about in previous podcasts, in particular not enough resources and the mounting burden of illness.

Everyone is very excited that AI will save us, but everyone's also very worried that AI will annihilate us, hence the recent Bletchley Park Summit and its focus on safety and security. And earlier this year you'll have noticed that over 1,000 AI experts publicly called for a pause so that we could take longer to assess the risks.

Internationally, there's a huge effort now to set out principles, values, frameworks or guardrails to guide us in this field. And the UK government is developing some architecture: there's an AI strategy, there's a central coordinating unit, the Office for AI, and there's a new AI Safety Institute that's just been announced. Earlier this year we had a white paper outlining a pro-innovation approach to the regulation of AI, and lots of international agencies, the UN, the OECD, the EU, and even the White House, among many others, have also suggested frameworks and approaches to regulation and legislation.

The emphasis thus far has been on safety, and we'll put some information about all of this in the show notes. But as we'll hear from our guests today, safety is only one aspect to consider. The other big questions include: in an area with huge commercial interests, who should make sure that public objectives are served, and how? What objectives and principles should guide us, and how do we foster the public trust needed for progress? Where is the public voice in all of this, for example in setting priorities? Is the data on which these models are trained diverse enough, or could they be biased? Who's accountable for AI, the commercial companies that develop the kit or the health care provider using it? And how can the NHS and social care get up to speed fast with all these developments?

So here's Effy Vayena, who is Professor of Bioethics at the Federal Institute of Technology in Zurich, on how to engage the public. But a warning: there's a bit of background noise as I decamp to a restaurant to interview our guests.

Effy Vayena:

I think there are lots of principles, and lots of organisations still trying to come up with frameworks. On the one hand that shows that people are truly interested and think we have to do something about it, but the question at this point is how we really translate those frameworks into action in real practice. If I were to put my dollars anywhere, I would put them into translating those principles, rather than into going on and on about more principles.

Jennifer Dixon:

And refining them again. So some of the principles we're talking about here today are safety, obviously that's uppermost, isn't it? But then there are also principles of fairness and inclusivity. Are there others that you think are very important to guide us here?

Effy Vayena:

Well, those are the obvious ones, right? There is also autonomy: to what extent we're all going to maintain our autonomy, our ability to make decisions about ourselves and about our health, something that is a pillar in bioethics and in law. Privacy remains important, and it faces different kinds of challenges with AI. And then there's this other big area, maybe a principle of sustainability, and the impact these technologies will have on our planet, something I think we still need to consider.

Jennifer Dixon:

And how well developed are these discussions about AI in health care, the principles and the regulatory framework, in Switzerland at the moment?

Effy Vayena:

Well, I wouldn't say that it's extremely well developed in Switzerland, perhaps with the exception of the World Health Organization, which is in Switzerland-

Jennifer Dixon:

Yes, indeed.

Effy Vayena:

I think [inaudible]. There was a WHO guidance document on the ethics and governance of AI in health, so there's one big document. In Switzerland itself, I think we're more waiting and watching what others are doing, although we are the number one innovation country in the world.

Jennifer Dixon:

Is that right?

Effy Vayena:

Yes.

Jennifer Dixon:

More than Israel?

Effy Vayena:

We're ranking first-

Jennifer Dixon:

Wow.

Effy Vayena:

Consistently, over a number of years. We're also a country that needs innovation in health, and probably some help from AI, because we have the second most expensive health care system in the world. We've got a lot of work to do, and hopefully the technology can help. But right now I wouldn't say we're a pioneer when it comes to the governance of AI or its frameworks. We're thinking about it, but we're not extremely active yet.

Jennifer Dixon:

I know this conference has been a lot about how to engage the public in discussions about the principles and objectives of AI and how it should be developed. And from an outsider's point of view, Switzerland is very, very good at asking its citizens all sorts of different things. But tell me what you're thinking about the kinds of methods by which the public might be engaged in some of the key arguments.

Effy Vayena:

We are a direct democracy. But so far we haven't deployed that sort of system in discussing or making decisions about AI, perhaps because it's a little early. And before we get to that type of engagement, I think we need to take the earlier steps of having this deliberation with the public. I still feel that conversations about AI in general, or in health in particular, are held among experts. They happen on our panels, they happen in our journals, and then what gets out into the lay press are those impressive headlines about how AI is going to kill us or how it's going to save us. That's not the kind of deliberative process-

Jennifer Dixon:

No.

Effy Vayena:

That we need to have. We have some models from other areas of innovation and technology. We have citizen juries; we have moderated platforms where citizens can learn and can also raise their own concerns. So I think we have tools, but I don't feel that we have actually used those tools in any specific context extremely successfully.

Maybe AI gives us an opportunity by offering additional tools that we can use. My feeling is that we still need to have these conversations and this deliberation with the public at the community level, where people are perhaps more invested: for example, what kind of impact will a technology have in my community, rather than in the wider world or in my country? And we should start deploying approaches that will truly engage people to raise concerns, to learn and to make a contribution.

Jennifer Dixon:

We've had discussion here about who holds the public interest. Does there need to be a guiding hand, when you think of all the conflicts coming from, for example, the commercial backers of AI, the developers of AI, the scientists who are engaged, lots of vested interests? Is there appetite, in your view, in Switzerland for the federal government to act in this way?

Effy Vayena:

Well, we're seeing some discussion about strategy at the federal level, about some initiatives of that sort, but again, Switzerland is organised in a way that puts the action at the cantonal level. I believe for issues like this one, where a lot of it is uncharted territory, a more concerted effort at the federal level might be one way to set a direction that the cantons can then take on. But beyond the country level, I think in Europe we might need an even more continental approach, and I personally would like to see more global coordination. We know that multilateralism is struggling a bit at the moment, but if you look at these technologies, some of them are not going to be specific to health: they will impact health, but they will be much bigger than health.

Jennifer Dixon:

And they might be sold directly to the public.

Effy Vayena:

Exactly.

Jennifer Dixon:

Bypassing all governance layers.

Effy Vayena:

Exactly, exactly. So again, if we don't have some kind of global standard or harmonisation, I'm not sure how we're going to avoid the pitfalls that we want to avoid. So hopefully that's a direction where the developers and scientists, the doers of the technology, may be more able to reach some sort of consensus than at the political level.

Jennifer Dixon:

We speak, actually, on the day that the White House has issued an executive order laying down a framework for all developers within the jurisdiction of the United States, despite the different political hue of the individual states. So if it can happen here, then presumably it can also happen across Europe, where we are much more aligned on basic values.

Effy Vayena:

Some of the basic values, yeah.

Jennifer Dixon:

Indeed.

Effy Vayena:

Hopefully.

Jennifer Dixon:

Thank you very much.

Professor Alastair Denniston, an ophthalmologist and academic at the University of Birmingham, is leading research into the evaluation and regulation of AI. He has some words now about how to engage the public and why it matters.

So we're here in Maine, talking about AI and health care with a particular focus on large language models. Can you pick out what you think some of the biggest insights are for you?

Alastair Denniston:

It's been interesting that the focus was framed broadly, AI models as agents, as tools, but it's perhaps unsurprising that even though large language models weren't called out specifically in the remit, that's where the attention has been. I think the areas which have been particularly challenging are around control, if that's the right word, of data, in the sense that these models, unlike more narrow applications of AI in medicine, have been generated on enormous datasets that weren't gathered on a traditional consent, opt-in type model. There have been really interesting conversations and thoughtful analysis on what this means for patient autonomy. What does the right to be forgotten, if we look at this in a more GDPR context, look like in the context of a large language model? I think what we should also be trying to hold on to is the opportunity to improve patient care. And a really strong theme has been around patients as the leaders. This is not something that's being done to patients or on behalf of patients; this is all of us together as that wider public, as humanity, with different roles, some as patients, some as carers, some as health practitioners, some as engineers, trying to find a way together.

Jennifer Dixon:

This is just such a complex environment, and in a sense the public has to be engaged at very different levels. One of the areas is clearly how the field is developing at a strategic level across health care, particularly if you think about the NHS: making sure that, for example, commercial interest in developing AI in lucrative areas doesn't trump the public interest in areas of illness where people may not have much money and there isn't going to be much profit. Have you had any thoughts prompted about how to engage the public, maybe drawing on your work on Standing Together?

Alastair Denniston:

I think the first thing is to be transparent about it. We've found this in a lot of our work on trying to improve practice, whether it's in the design and delivery of clinical trials of AI interventions, or around transparency of the datasets being used to build AI health technologies; you referenced Standing Together, which looks at dataset diversity and inclusion in order to help ensure that AI models are equitable and inclusive. What we've found is that transparency is our greatest friend as a first step: opening up those conversations so that it's not a small group of people trying to set a path on behalf of humanity, but rather all of us, as public and patients and other stakeholders, having an open and public conversation about how we balance it [inaudible]. I think that's the first step.

And then the next step, I think, is to have formal processes which help us set those priorities. One of the interesting things that has come up is the danger of getting this wrong: if the path is problematic, if we get things wrong along the way, there's a real danger that the public will very reasonably push back and say, ‘Well, we tried that, it was a disaster, we don't want to go there.’ But if there's a broad consensus that this is a really precious, big win for patients and the public more widely, then I think there'll be much more understanding: okay, we won't get everything right, but it's worth it, we understand that we're committed to learning here, [inaudible] we will work together.

Jennifer Dixon:

Thank you so much.

So the worry is that if the public aren't involved properly in this developing field, particularly over the use of their data, and there isn't enough dialogue, openness or communication with the public, then this could significantly slow down development that we could all benefit from. So what's the perspective from the US, where a lot of applications will be developed? Here's Ashish Jha's perspective. Ashish is the Dean of the School of Public Health at Brown University and formerly an advisor to the White House on COVID-19.

Ashish Jha:

AI has clearly come very quickly onto the national stage as a tool that we have to put our arms around. This has been long coming in terms of development, but over the last year we have seen massive gains in AI technologies that are going to have a profound impact on health care. And we really have not thought through how to maximise the benefits and how to manage the risks. So this meeting, I think, gets that conversation going.

Jennifer Dixon:

And it seems everybody's at a similar point. In the US, today there's been a White House executive order; we've had the EU AI Act, which is about to be passed; and in the UK, a white paper on pro-innovation regulation. Can you say a little about where the US is on policymaking to guide sensible AI development?

Ashish Jha:

Certainly the White House, and other parts of the US government, I think recognise that we need a strategy here: leaving this completely to itself, letting the market drive it fully, is not going to lead to optimal policy outcomes. There are some very complicated issues here. How do you regulate AI? What are the other policy tools: accreditation, financial incentives? There are other ways of achieving optimal social outcomes, and I think what we're seeing is the US government getting involved in that conversation, laying out some groundwork for safety and protecting people, but also starting to provide guidance to the marketplace, because we know there's going to be very large private investment in this area, and we want to make sure we guide that investment in ways that really improve health and wellbeing.

Jennifer Dixon:

So we've talked here today about the kinds of principles that should guide sensible development. First among those was safety, and I think everyone would agree that these systems have to be safe in health care. But we also talked about access, about inclusion, and about cost-benefit. How interested do you think a pro-innovation US federal government would be in going beyond safety into some of these other areas, which sound a bit like big government, don't they?

Ashish Jha:

Here's where I think we're going to see engagement. I don't think you're going to have the federal government prescribe exactly which tools can and cannot be used for what purposes. But I can imagine, for instance, Medicare, which is the largest payer of health care in America, setting out some very specific pay-for-performance or other kinds of programmes that encourage certain types of health care services or health outcomes, and that can then drive AI investments in those areas. I can imagine the Federal Trade Commission being very involved in making sure that AI systems are competitive and that there is not a monopoly by one organisation. So you are, I think, going to see the government play a very specific role: not shaping the entire market, but making sure that bad things don't happen, and also making sure that we're pushing towards better outcomes for patients and for people.

Jennifer Dixon:

And on the regulatory architecture in health care, to make sure that safety is uppermost and that other principles like access and inclusion are followed through: what needs to be done to this architecture, or is it pretty solid as it is?

Ashish Jha:

Well, I actually think this is a place of substantial challenges. We don't have a good regulatory structure for managing AI. I don't think this is something that the Food and Drug Administration is equipped for or is going to be able to do. Obviously Medicare has an important regulatory role, but it usually has a pretty high bar for getting involved, through something called conditions of participation, where you really have to be pretty egregious to lose your CMS, your Medicare licensing. So that's going to be very unusual.

Jennifer Dixon:

Oh, I see, yeah.

Ashish Jha:

I think this is going to fall probably a lot more to private sector accreditation, certification. And then the government is going to sort of think about what role can it play to make sure that those accreditation bodies are actually doing a good job. I think there's a lot of important work ahead here.

Jennifer Dixon:

Yeah, similarly in the UK, I must say. And so lastly, where's your hope for all of this? Where do you think the gains will be?

Ashish Jha:

I am actually very optimistic that AI-based tools are going to make very large gains in health care and health around the world. Certainly once you get outside the high-income countries, if you look at middle-income and low-income countries, even when people have access, the quality of care they receive is abysmal, and people often don't have access to more specialty services. AI can augment the health care workforce globally quite substantially, so that's a place where we can see huge gains. Even within the United States, there are some places with lots and lots of doctors and specialists, I think of Boston, and lots of places where there are very few, so even in countries like the United States you're going to see places that can benefit a lot. We absolutely have to look at the downside risks, we absolutely have to manage safety, but net-net, if we do the policy stuff right and if we do our job right, this should be a huge boon for improving access, improving quality, and ultimately improving health outcomes for everybody, but particularly for the underserved, who have a hard time accessing the system we have now.

Jennifer Dixon:

Yeah, where there are distinct labour shortages. And indeed in the UK we've got a demographic winter coming over the next 30 years, where we'll have an ageing population at the same time as we don't have the workers. So this is another area for all of us, isn't it?

Ashish Jha:

Exactly. And again, I think if we do it right, this can really augment the workforce in a way that deals with burnout and actually makes the whole health system much more effective. And that's what we should be shooting for.

Jennifer Dixon:

So, as Ashish mentions, payment incentives are a strong nudge to providers and doctors in the US on how to use and spread innovation. Here's David Cutler, a well-known economist from Harvard University, on the uses of AI and what policymakers should be thinking about when designing payment for AI.

David Cutler:

So it is a big mistake to try and fight the payment system. What do I mean by that? If you want AI to be helpful in reducing costs, but you pay a lot every single time AI is used, then there's going to be a tendency to overuse it, not just to use it correctly. Or conversely, if you say, ‘No, I don't want to pay for it, I want you clinicians to adopt it,’ then the clinicians say, ‘But how am I supposed to afford to adopt it when my budgets are super tight?’ So you'll have to have a payment system that enables and encourages it where it's valuable, and not where it's not valuable.

Probably the worst thing to do is just to pay every time the AI technology is used, because that's not how doctors pay for it and that's not how they think about it. For example, if you're buying a specific machine, sometimes you're going to bundle it; it's not a separate machine, it's something like an image overlay on the CT scanner, so you just say, ‘I'm paying whenever you do a CT.’ Think of TV, which is similar in many ways, because the cost of one more person watching a show is basically zero. We've moved away from paying each time you watch a show, which doesn't reflect the cost, to a subscription model: do it the way Netflix does. You pay monthly and you use it based on whether you think it's valuable to you. And it's the same with AI in medicine. If the doctor faces a monthly fee and it's big, then you have to help them pay it; if it's not, you just bundle it in and say, ‘Look, I'm paying you monthly fees for caring for patients, so just assume that's in there.’ The ideal is that if you're paying people to do a better job, then the AI is valuable to them because it helps them achieve better outcomes with their patients.

Jennifer Dixon:

One question is about whether AI will substitute for labour, or whether it will just enhance and extend our capacity to improve quality.

David Cutler:

We're not yet at the point where you need AI and no radiologists. Maybe at some point in the future we will be, but generally almost every technological advance in medicine has increased the need for trained clinicians.

Jennifer Dixon:

Are you hopeful?

David Cutler:

I am hopeful. In part because it's more fun to be hopeful than not to be. And I am hopeful because we're trying to do this before things have settled in, and that gives us a lot of running room for how to do things. It's not as if we're trying to change an entrenched system to do something different from what it was doing.

Jennifer Dixon:

Thank you so much.

David Cutler:

My pleasure.

Jennifer Dixon:

One big question is who AI agents serve, and therefore what the priorities should be as AI develops. In health care, surely AI should serve the patient first and foremost, but it's also recognised that there are other legitimate interests: for example, AI should serve the wider public and the payers of care, like the NHS. As Alastair Denniston put it-

Alastair Denniston:

While it's perhaps a no-brainer to say these need to serve the interests of patients, there was also recognition that, whilst that is front and centre, we would have to consider the wider societal impacts and the sustainability of these models.

Jennifer Dixon:

Faced with low growth, as many developed countries are, governments are really looking for the next big hope to grow their economies, by which we can then afford the NHS. So a lot of the discussion about technology and innovation is framed in a business way, a growth way, life sciences in particular, and so on. And yet, do you think that pull of gravity towards business is going to be a challenge when public objectives may run in the other direction? In other words, AI development for business may not be good for AI development for sustaining the public realm and some of the things we hold dear, like the National Health Service.

Alastair Denniston:

I think it's also important to recognise that that's okay. It's not that companies and big tech are trying to drive things in a really negative way; they're not. It's just recognising that our priorities may not always align with current financial structures, and you made that point really eloquently, Jennifer. So then we have to think: okay, what are the incentives, the structures, the reward mechanisms? And that probably now falls to government: what are the structures we could put in place that prioritise the things that standard financial models, standard profit, won't drive in the tech sector, whether that's big tech or spin-outs?

So the things I would call out: one is safety by design. We can't just regard safety as an optional extra. There is a tendency to try to accelerate things to market and then see what happens. That ‘move fast and break things’ approach is fine for consumer tech, which is not safety critical, but we're in a safety-critical context.

It's not quite like nuclear, but it is about making sure there are robust legal frameworks around that priority of safety, so it's safety in the design. Something else we've worked a lot on, and again I'd call out the great work on Standing Together led by the University of Birmingham, is making sure that the AI health technologies we build are based on inclusive, diverse datasets, wherever that's possible, and working towards that. And again, that highlights the point about transparency: we want to get to better datasets for tomorrow, but we also need to understand the data we have today, and how much we can trust it to work across the breadth of the rich diversity of our population.

So safety first of all, and safety for everybody as the second point. And then I think it comes back to the point you were making earlier about prioritisation: how can we more effectively signal to those companies, and again this is a role for governments, that yes, you can bring us any technology you think is useful for our consideration, but actually this is our top five priority list?

Jennifer Dixon:

Yeah. And I suspect those priority criteria could include where there are labour shortages, such as primary care, and where there are long, appalling waits, such as in mental health services, and so on and so forth.

And as others pointed out, AI might actually help address inequality if the alternative is no access to care at all. So one interesting area for discussion is who is legally accountable for care if an AI agent is involved. Here, case law is at an early stage of development, and it seems likely, particularly if generative AI is used, that liability might appropriately be shared between the responsible clinician and the commercial developer of the underlying AI model. All of this is very early in its development across the world, and the legal position will surely become clearer as we go on.

While there are safety worries about machine learning applications generally, there is particular concern about generative AI: self-learning models that generate new data, where the results might be produced in a black box that no one can see into. And anyone using ChatGPT may have discovered how it can make up data and ‘hallucinate’, as it's described.

I interviewed Peter Lee, Microsoft's Head of Research. He welcomed better regulation for these and other models, and thought the main commercial technology companies would agree. Intriguingly, he thought we might be only a few short years away from a situation where a few larger technology companies make available basic foundation models, from which health care providers could build bespoke applications internally based on their own data, and then, possibly in the NHS, make them available to other providers through open source. This way providers might set their own priorities for development, rather than facing a random array of offers from the wide range of tech companies that come to their door. The foundation models would be developed over time and possibly made available to health care providers on a subscription model, like Netflix, a bit as David Cutler was suggesting. But of course, one key question will be the price and procurement of these models if they come to pass, and in particular whether the NHS can use its huge monopoly power to get that price down.

On the regulation of AI more generally, according to Air Street's annual global analysis of AI developments, there are broadly three approaches developing across the world. At one end is the more permissive approach: relying on guidance, monitoring and frameworks, with limited use of legislation for now. The US is taking this approach, along with, to an extent, the UK, at least until we understand a bit more about the field. In the middle, the second group takes a more statutory approach, via legislation based on defined levels of risk; the EU is taking this approach in the proposed EU AI Act, which is to be voted on shortly. And at the other end of the spectrum is banning certain developments outright, as in the case of China and some other countries.

Most countries are looking at this, and our approach in the UK seems to be, as I say, more permissive, which means in health care there is no central government regulator or extra legislation at this point. Instead, there will be reliance on the sector regulators, NICE, the MHRA, the Health Research Authority (HRA) and the CQC, developing guidance and monitoring as we go along, within existing legislation such as the Human Rights Act and the Equality Act. And there may be more legislation down the track once we understand what's happening.

So we'll watch and learn internationally to see how these different approaches work out; no one type is obviously right. The question is how far we can learn from each other and put in the right guardrails as the field develops.

So lastly, the NHS itself is getting up to speed with AI, for example through the NHS AI Lab, and is trying to develop and link the NHS data infrastructure needed to develop and test the applications we're going to need. There is progress, but there's also another hard reality. One AI specialist working at a top London teaching hospital recently told me that a highly accurate AI application had been developed to spot pulmonary embolism in patients at an early stage. It cost £60,000 a year to maintain, and the finance director at his trust, one of the largest in the country, rejected it on affordability grounds, even though it would clearly save lives. So that's the reality of the NHS right now.

And overall, across the health care landscape, including the regulators I mentioned, the royal colleges and the universities that train clinical staff, there really is an enormous amount of coordinated thinking and change needed to respond to these opportunities. We're in the early stages, but the prize may well be worth it. The key question on the table now is: given the state we're in, can we really do what it takes to realise these benefits?

So on that note, we must leave it there for now. Thank you to my guests Effy, David, Alastair, Peter and Ashish. There's more information as ever in the show notes to get you deeper into the field. And it is surely something we're going to return to in future.

Next month we'll be doing the Christmas roundup of the choicest highlights from our podcast this year. Join me in late December as you reach for the mince pies and we will mull over it all. And thanks go to Sean and Leo at the Health Foundation, to Paddy and team at Malt Productions for all their help. And it's bye from me, Jennifer Dixon.
