England could become a world leader in providing high-quality, affordable health care by drawing on a rich history of policies aimed at improving quality. This was a clear message from our recent publication A clear road ahead. As we highlighted there, many policies have been developed and implemented, but relatively few have been evaluated to help guide future decisions.

One of the perennial problems of doing evaluations is that changes are often introduced as a 'big bang' – across all areas at the same time. As a result, there is no obvious control group against which to assess the impact of the change.

But what if analytical approaches could be developed to deal with this problem?

Named accountable GPs were introduced across all general practices in England from April 2014. The policy responded to concerns that patients with more complex needs would benefit from improved continuity of care. In the data analytics team, we have been looking at the impact of this policy and our findings have just been published in BMJ Open.

On first view, the policy appeared near impossible to evaluate. There was no obvious control group (another 'big bang' implementation), and no obvious data set to use. But we were able to make progress because of two quirks of the GP contract.

First, the contract required all GPs to record, in the electronic medical records, when patients were assigned a named accountable GP. Some GPs voluntarily contribute to a research database, the Clinical Practice Research Datalink (CPRD). As a result, we were able to access de-identified data from 200 practices (255,469 patients).

Second, the contract was very clear on who should be assigned a named GP – patients aged 75 and over. This meant we could use a little-known evaluation technique called 'regression discontinuity'.

Here's how it works...

Perhaps you have recently had a birthday, and on your birthday someone asked you what it feels like to be 30, or 40, or 75, or whatever age. And perhaps you thought to yourself: 'no different from yesterday'. This is the insight that regression discontinuity uses.

We argued that patients aged just above 75 are not that different from those aged just below 75. It is true that older people do tend to have higher health care needs than younger people, but health needs do not usually change overnight when somebody has a birthday.  

The contract meant there was a big difference in the proportion of patients who had a named GP at age 75 (increasing from 4% to 80%). So we could attribute any difference in outcomes at age 75 to the impact of having a named GP (subject to a few adjustments that we describe in the paper).
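The idea can be sketched in code. The snippet below is a minimal illustration of regression discontinuity on synthetic data, not the study data or the adjustments described in the paper: it simulates a continuity outcome that drifts gently with age but jumps at the age-75 cutoff, fits a straight line on each side of the cutoff, and estimates the jump as the difference between the two fitted values at 75. The jump size, noise level, and bandwidth are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the study data): an outcome that rises
# gently with age, plus a hypothetical jump of 0.05 at the age-75 cutoff.
n = 5000
age = rng.uniform(65, 85, n)
true_jump = 0.05
outcome = (0.4 + 0.005 * (age - 75)
           + true_jump * (age >= 75)
           + rng.normal(0, 0.05, n))

def rd_estimate(x, y, cutoff, bandwidth):
    """Local linear regression discontinuity estimate: fit a straight
    line on each side of the cutoff, using only points within the
    bandwidth, then take the difference of the two fitted values at
    the cutoff itself."""
    below = (x >= cutoff - bandwidth) & (x < cutoff)
    above = (x >= cutoff) & (x <= cutoff + bandwidth)
    fit_below = np.polyfit(x[below], y[below], 1)
    fit_above = np.polyfit(x[above], y[above], 1)
    return np.polyval(fit_above, cutoff) - np.polyval(fit_below, cutoff)

est = rd_estimate(age, outcome, cutoff=75, bandwidth=5)
print(f"Estimated jump at 75: {est:.3f}")  # close to the true 0.05
```

In the real analysis, a flat estimated jump (no discontinuity at 75) is exactly what a null finding looks like: the outcome evolves smoothly across the cutoff despite the sharp change in exposure to the policy.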

We were interested in whether having a named accountable GP improved continuity of care. To measure this continuity, we looked at all of the GP contacts that a patient had following allocation to a named GP, and calculated the proportion of these that were with the most commonly seen GP. So for example, if a patient had five contacts, three of which were with a single GP, then the usual provider of care index (as it's called) is 0.6. We reasoned that, if the named accountable GPs improved continuity, then this index would shift upwards.
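The usual provider of care index is simple to compute from a patient's contact history. Here is a short sketch matching the worked example above (the GP identifiers are invented for illustration):

```python
from collections import Counter

def upc_index(gp_ids):
    """Usual provider of care index: the proportion of a patient's
    contacts that were with their most commonly seen GP."""
    if not gp_ids:
        raise ValueError("patient has no recorded contacts")
    # Count contacts per GP and take the largest count.
    most_seen_count = Counter(gp_ids).most_common(1)[0][1]
    return most_seen_count / len(gp_ids)

# Example from the text: five contacts, three with the same GP.
print(upc_index(["A", "B", "A", "C", "A"]))  # → 0.6
```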

The paper describes the findings in more detail, but essentially we saw no change in the usual provider of care index at age 75. So despite having a named accountable GP, patients were no more likely to see the same GP when they attended than they would have been had the policy not been introduced.

We also looked at how often patients saw a GP (since we thought the policy might stimulate extra demand from patients) and how many times patients were referred to specialist care or underwent common diagnostic tests (since we thought the policy might have led to the identification of unmet health needs). No changes were found.

We did the analysis in line with a pre-specified protocol.

Clearly the study leaves a lot of questions unanswered. Overall the initiative did not affect the metrics described above, but we couldn't examine other outcomes (including patient satisfaction or hospital admissions). It's possible that the change was more pronounced within some general practices than others. Finally, it's also possible that nine months was too short a time to assess the impact of the change. For example, it might take longer than nine months for patients to become aware of their entitlement to a named GP, and to be more assertive about asking to see that GP when making appointments.

So what can we take away from the study?

One of the exciting aspects of this study is that we measured continuity of care in general practice using existing data. Metrics might now be developed to guide efforts to improve the continuity of health care. We know from previous studies that continuity of care is valued by doctors and patients alike, and there are some indications of a link with patient outcomes.

Even if this particular initiative did not improve continuity of care in the first nine months, that doesn't mean that other approaches shouldn't be tried. A number of ideas have been suggested, for example changing receptionist behaviour, or organising large practices into small teams of doctors that can see each other's patients when one is away. Although further evaluation would be required, these approaches may be worth exploring.

Meanwhile, these analytical methods have huge potential to inform decisions that are taken both nationally and locally to improve care. However, while the NHS is home to many promising initiatives, often it does not have the time or resources to invest in analytical methods of evaluation. At the Health Foundation, we are increasingly collaborating with NHS teams in this area. Watch this space.

Adam is Director of Data Analytics at the Health Foundation.
