Rapid Responses to:

RESEARCH:
Stephen Westaby, Nicholas Archer, Nicola Manning, Satish Adwani, Catherine Grebenik, Oliver Ormerod, Ravi Pillai, and Neil Wilson
Comparison of hospital episode statistics and central cardiac audit database in public reporting of congenital heart surgery mortality
BMJ 2007;335:759

Rapid Responses published:

Surgical Mortality and the Media
Stephen Westaby, Nicholas Archer, Neil Wilson (21 September 2007)
The scandal of poor quality data
Stephen Black (26 September 2007)
Comparison of hospital episode statistics and central cardiac audit database
Paul Aylin, Brian Jarman, Alex Bottle (27 September 2007)
Public reporting of Outcomes in Surgery: Time to reflect on Bristol?
Ashok I Handa (11 October 2007)
Hospital Episode Statistics (HES) as a Tool of Profiling Surgical Outcome in England: Imperfect but Indispensable
Muhammad F Dawwas (17 October 2007)
Routine data sets not shown to be useless
Stephen Duckett (17 October 2007)
Using Routine Data
Helen Thornton-Jones (19 October 2007)

Surgical Mortality and the Media 21 September 2007
Stephen Westaby,
Consultant Cardiac Surgeon
John Radcliffe Hospital, Headington, Oxford OX3 9DU,
Nicholas Archer, Neil Wilson


In this week’s British Medical Journal [1] we express serious reservations about the publication of non-risk-stratified cardiac surgical mortality statistics. Our concern is the predictable effect of adverse publicity on the surgical teams, on bereaved parents, and on those with children about to be treated at targeted centres. We support the collection of data to maintain clinical standards, but the rationale for public disclosure needs justification. The “Bristol factor” alone is insufficient now that it is clear that Hospital Episode Statistics are unreliable for measuring mortality and therefore unsuitable for comparison between centres.

In any ranking of centres by surgical mortality, roughly half of the units must fall below the average. Unfortunately, in the lay press “below average” is often interpreted as inadequate or incompetent. Around the time our paper was accepted, the NHS Information Centre published non-risk-stratified death rates collected by the UK Central Cardiac Audit Database (CCAD).

Although this is a more accurate source of mortality information, the statistics are detailed and difficult for the lay reader to interpret. On this occasion the Scottish press decided to castigate the centre in Glasgow, whose overall survival rate was 95.9% against a UK average of 96.7%. Some of the statements reported beneath the headline “Fears over child surgery deaths” [2] included:

• “Death rates for children’s heart operations are significantly higher than the rest of the UK” (untrue; see the sketch after this list)
• “This is totally unacceptable and I am very concerned… The Hospital might be happy with its figures but I am not” (quote from the Chairperson of the Scotland Patients Association in “Scotland on Sunday” [2])
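
Whether those figures actually support a claim of “significantly higher” death rates cannot be judged from the percentages alone; it depends on the number of operations behind each figure. A minimal sketch, using entirely hypothetical case counts rather than the actual Glasgow or UK data, of the kind of two-proportion test such a claim would require:

from math import erf, sqrt

def two_proportion_z(deaths_a, n_a, deaths_b, n_b):
    """Two-sided z-test comparing two mortality proportions."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    p_pool = (deaths_a + deaths_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical volumes: 1,000 procedures at one centre, 4,000 nationally.
z, p = two_proportion_z(deaths_a=41, n_a=1000,    # 4.1% mortality, 95.9% survival
                        deaths_b=132, n_b=4000)   # 3.3% mortality, 96.7% survival
print(f"z = {z:.2f}, p = {p:.2f}")
# With volumes of this order the difference does not reach conventional
# statistical significance; the actual figures would be needed to judge the claim.

With case volumes of this order the gap is well within ordinary random variation, which is why a formal test, not a headline, should decide what counts as “significant”.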

The attack was covered by a local television company, causing a loss of confidence in a thoroughly reputable unit. This was followed by a rebuttal of the media’s actions by the CCAD and the President of the Society for Cardiothoracic Surgery. Nevertheless, the damage had been done. The lay press provide their own interpretation of data in order to present a sensational headline.

What was achieved by public reporting? The answer is simple. Talented, hard-working and dedicated healthcare professionals were inappropriately forced to defend their practice. Prospective patients and their relatives were filled with unnecessary anxiety. Cardiac surgery is the only specialty under such intense public scrutiny in the UK, and the risk of unjust condemnation constitutes a third and unwelcome party in the consulting room.

References:

1. Westaby S, Archer N, Manning N, et al. Comparison of Hospital Episode Statistics and the Central Cardiac Audit Database in the public reporting of congenital heart surgery mortality. BMJ 2007;335:759.

2. Scotland on Sunday, 10 June 2007.

Competing interests: None declared

The scandal of poor quality data 26 September 2007
Stephen Black,
Management Consultant
London SW1W 9SR


The analysis presented by Westaby et al. makes a good case that we shouldn't publish unreliable HES-based outcome data. But it misses what ought to be two critically important questions.

The first is about how statistical information is presented. Standard practice gives information to the public and to experts in a form almost guaranteed to mislead. This leads to the erroneous conclusion that members of the public can't be trusted to interpret complex statistics: but they can when the statistics are framed and presented in the right way (see Gerd Gigerenzer's book "Reckoning with Risk" for examples of how to do it right, and of how even medical experts give profoundly misleading advice when evidence is presented to them in conventional statistical terminology). The remedy is to frame the information so the public can interpret it, not to withhold it so they can't.
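
Gigerenzer's central recommendation is to restate conditional probabilities as natural frequencies. A minimal sketch, using illustrative screening-test numbers that are not drawn from the paper under discussion:

# Illustrative numbers only: prevalence 0.8%, sensitivity 90%,
# false positive rate 7% for a hypothetical screening test.
prevalence, sensitivity, false_positive_rate = 0.008, 0.90, 0.07
population = 1000

with_disease = round(population * prevalence)                                # 8 people
true_positives = round(with_disease * sensitivity)                          # 7 of them test positive
false_positives = round((population - with_disease) * false_positive_rate)  # about 69 healthy people also test positive

print(f"Of {population} people, {with_disease} have the condition; "
      f"{true_positives} of them test positive, but so do {false_positives} healthy people.")
print(f"So only about {true_positives / (true_positives + false_positives):.0%} "
      f"of positive tests indicate disease.")

Framed as counts out of 1,000 people, the result is immediately interpretable by a lay reader; framed as "90% sensitivity with a 7% false positive rate", even clinicians routinely misjudge it.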

But the other question is why we don't have better quality information in the first place. Surely even the most motivated medic would want to back up her intuition that she always did the best for her patients by actually collecting reliable statistics about the results of her interventions? Yet there is widespread indifference to the quality of recorded data, and few specialties have alternatives to HES to allow independent checking. Some of the resistance comes from fear of bureaucracy, some from the perception that medicine is not like industry, where few firms ignore the need for reliable performance data and the process of continuous improvement.

But we know, where we can get the data, that the quality of care varies a great deal across hospitals and consultants. We also know that, when the right data are collected, analysed and published, big improvements are possible (for example, in the last decade American military medics have halved the death rate from battlefield injuries by being rigorous about collecting and analysing data).

Medical motivation and the public service ethos of NHS doctors do not guarantee us reliable, high quality care. It is the responsibility of the medical profession, not the bureaucracy, to ensure that reliable data about performance are collected and analysed. Then they will be able to prove that the care they give is consistently good and improving. It is a scandal that in most specialties they cannot do this and have to fall back on the line "trust me, I'm a doctor".

Competing interests: None declared

Comparison of hospital episode statistics and central cardiac audit database 27 September 2007
Paul Aylin,
Clinical Senior Lecturer in Epidemiology and Public Health
Dr Foster Unit at Imperial College, Department of Primary Care and Social Medicine, London SW7 2AZ,
Brian Jarman, Alex Bottle


We read with interest the paper[1] by the cardiac clinicians from the Oxford Radcliffe Hospitals NHS Trust. We published the follow-up of the Bristol Royal Infirmary analysis to which the authors refer[2] and would like to comment on the Oxford paper.

1. “The clinical teams did not verify the data.” We consider that we did give the clinical team at Oxford ample opportunity to comment on our results. We wrote two letters to an author of the Oxford paper giving details of our results, the first of which was sent over a year before publication. After some months, we received a letter from the Medical Director of the trust which did not dispute our figures and also confirmed that the trust had become aware of a downturn in their results with respect to transposition of the great arteries (TGA) before 2000, and that no corrective surgery for TGA was performed in Oxford after May 2000 for this reason. With the Medical Director’s agreement, we included a statement in our paper suggesting that Oxford’s own data had shown a fall in mortality since 2000.

2. That our paper “drew damaging conclusions”. In our paper we discussed problems with data quality, and also suggested that differences in case mix may be an explanation for the Oxford results. Our conclusions were that “Mortality at the Bristol Royal Infirmary has fallen markedly after the changes there, and a more gradual reduction in national mortality is evident from the time these data were first available. Improved quality of care may account for the decrease in mortality, through new technologies or improved perioperative and post-operative care, or both. Whatever the reasons for the reduction in mortality, this seems to be good news for patients and parents.”[2]

3. “HES recorded fewer cases than the central cardiac audit database.” We previously made clear the limitations of using OPCS4 codes in defining open operations,[3] in that there is no explicit code for open heart surgery. We used a definition arrived at by consultation with a paediatric cardiologist, a paediatric cardiac surgeon and a national coding expert for the Bristol Inquiry.[3] Because this definition differs from that used within CCAD there will inevitably be differences in numbers. The Thames Valley Strategic Health Authority Report came to the same conclusion, “the data provided by HES and the CCAD could not be directly compared” due in part to the fact “that the procedure codes used by the two different datasets were not interchangeable or able to be cross-referenced.”[4]

4. The statistics quoted in the Oxford paper differ significantly from the mortality figures quoted to us by the Medical Director of the trust. There are also differences in comparison with the official CCAD figures published on the Congenital Heart Disease website.[5]

5. Westaby et al. declare no competing interests. Westaby et al. describe their paper as an external review, yet all the authors are from Oxford. The original Thames Valley SHA report,[4] on which the Oxford paper is based, concluded that “the data provided by HES and the CCAD could not be directly compared”, yet the Oxford paper does just that.

We agree with the Thames Valley SHA report that “HES and the CCAD both have an important role to play in the measurement of activity and outcomes in the clinical setting.” We also agree that the CCAD could potentially provide an alternative and improved data source for paediatric cardiac surgery outcomes. However, CCAD data were only published for a single year of activity prior to our publication, and so could not be used to examine historical trends. We also note that the way CCAD data have been presented to date makes meaningful comparisons impractical.[6,5] For example, if one wants to compare mortality by centre for open procedures within the age group cited in the Oxford paper, one would have to examine 1,482 pages on the Congenital Heart Disease website. A commentator has said of the published figures, “’You can't compare apples and oranges’ is the usual defence for creating ever smaller subsets, but this data set is cut so fine that it's more like fruit salad.”[7]

Ultimately we concur with Bruce Keogh that, in time, clinical and administrative datasets should function as one and that, with the advent of performance monitoring and payment by results, all clinicians must be prepared to take an active part in institutional data collection.[8]

References

[1] Westaby S, Archer N, Manning N, et al. Comparison of Hospital Episode Statistics and the Central Cardiac Audit Database in the public reporting of congenital heart surgery mortality. BMJ 2007;335:759.

[2] Aylin P, Bottle A, Jarman B, Elliott P. Paediatric cardiac surgical mortality in England after Bristol: descriptive analysis of hospital episode statistics 1991-2002. BMJ 2004;329:825-9.

[3] Aylin P, Alves B, Cook A, Bennett J, Bottle A, Best N, Catena B, Elliott P. Analysis of hospital episode statistics for the Bristol Royal Infirmary inquiry. London: Division Primary Care and Population Health Sciences, Imperial College London, 1999. www.bristol-inquiry.org.uk/Documents/hes_(Aylin).pdf (accessed 21 Sep 2007).

[4] Thames Valley Strategic Health Authority, Oxford Radcliffe Hospitals NHS Trust Paediatric Cardiac Surgery Steering Group. Report of the paediatric cardiac surgery steering group. 2005.

[5] Congenital Heart Disease Website. The Information Centre. http://www.ccad.org.uk/congenital (accessed 21 Sep 2007).

[6] Gibbs J, Monro JL, Cunningham D, Rickards A. Survival after surgery or therapeutic catheterisation for congenital heart disease in children in the United Kingdom: analysis of the central cardiac audit database for 2000-1. BMJ 2004;328:611-5.

[7] Treasure T. Congenital heart disease: monitoring interventions after Bristol. BMJ 2004;328:594-5.

[8] Keogh B. Surgery for congenital heart conditions in Oxford. BMJ 2005;330:319-20.

Competing interests: PA, BJ and AB are employed by Imperial College and work within the Dr Foster Unit at Imperial. The Dr Foster Unit at Imperial is funded by a research grant from Dr Foster Intelligence (an independent health service research organisation).

Public reporting of Outcomes in Surgery: Time to reflect on Bristol? 11 October 2007
Ashok I Handa,
Consultant Vascular Surgeon
Nuffield Department of Surgery, John Radcliffe Hospital, Oxford OX3 9DU


I read with interest the report by Westaby and colleagues comparing administratively collected hospital episode statistics (HES), as reported by Aylin, with the clinically collected Central Cardiac Audit Database (CCAD).

This highlights the discrepancies between HES data, collected by poorly paid hospital coders working from poorly kept and often illegible case records, and data collected clinically by dedicated data managers in the 13 cardiac centres, subject to annual external validation.

I agree with Black (Rapid Response) that all surgical units should prospectively collect activity and outcome data. Clinicians should insist on, and hospital managers should provide, adequate administrative support for this to be a matter of routine. This would be good for patients, as it would allow accurate public reporting of each unit's performance and avoid such controversy in future.

Reflecting on Bristol, one wonders whether, had clinically robust data such as CCAD been available at the time, the GMC rulings on Dhasmana and Wisheart would have been the same. Having worked for them as an SHO in Bristol in the late 1980s, I did not doubt their commitment and dedication to their patients.

The cardiac surgical community, to their credit, have responded to Bristol with routine collection of clinically acquired data for national reporting. Vascular surgeons are now also responding, with the National Vascular Database organised by the Vascular Society. Unfortunately this is largely unfunded and unsupported by NHS managers.

Competing interests: None declared

Hospital Episode Statistics (HES) as a Tool of Profiling Surgical Outcome in England: Imperfect but Indispensable 17 October 2007
Muhammad F Dawwas,
Specialist Registrar
Liver Transplant Unit, Box 210, Addenbrooke's Hospital, Hills Road, Cambridge CB2 2QQ


The scathing criticism levelled by Westaby and colleagues against the HES database and those who use it to profile surgeon performance(1) should not undermine confidence in what is the only mandatory database of NHS hospital activity in England. Although it is undoubtedly true that HES underestimates 30-day mortality, this is neither a new finding(2) nor does it necessarily invalidate its utility in monitoring surgical outcome, given the current feasibility of, and high accuracy afforded by, linking HES to national mortality records(2,3). While the UK Central Cardiac Audit Database should certainly continue to underpin national comparisons of paediatric cardiac surgery outcomes, the unavailability of equivalent databases in many other NHS disciplines makes HES indispensable, at least for the foreseeable future.

Furthermore, Westaby’s data do not necessarily disprove Aylin’s finding that Oxford paediatric surgical mortality was discrepant with the national average(4), given the largely non-overlapping time frames of the two studies(1,4). This contention is further supported by the acknowledgement by Oxford’s own Medical Director of his centre’s apparently inferior outcome prior to 2000(4,5).

It is noteworthy that the statistical evidence leading to the identification of Bristol’s outlier status in the 1990s, a finding which Westaby et al presumably do not dispute, was largely based on HES-derived analyses. For Oxford, as was the case with Bristol, “the crucial issue is not whether HES precisely measures activity and outcome, but the extent to which feasible data inconsistencies could explain any observed divergent performance”(2).

In conclusion, while the study by Westaby and colleagues does highlight important limitations of HES, it neither disproves the possibility that their unit’s past outcomes were divergent nor disqualifies HES as an invaluable resource for research and audit in today’s NHS. The response of a healthcare provider to an unfavourable, arguably unjust, message should never come at the expense of discrediting the messenger.

REFERENCES

[1] Westaby S, Archer N, Manning N, Adwani S, Grebenik C, Ormerod O, Pillai R, Wilson N. Comparison of Hospital Episode Statistics and the Central Cardiac Audit Database in the public reporting of congenital heart surgery mortality. BMJ 2007;335:759.

[2] Spiegelhalter DJ, Evans S, Aylin P, Murray G. Overview of statistical evidence presented to the Bristol Royal Infirmary inquiry concerning the nature and outcomes of paediatric cardiac surgical services at Bristol relative to other specialist centres from 1984 to 1995. September 2000. http://www.bristolinquiry.org.uk/final_report/annex_b/images/Spiegelhalteretal_O_statev1.pdf (accessed 14 Oct 2007).

[3] Lakhani A, Coles J, Eayres D, Spence C, Rachet B. Creative use of existing clinical and health outcomes data to assess NHS performance in England: Part 1--performance indicators closely linked to clinical care. BMJ. 2005;330:1426-31.

[4] Aylin P, Bottle A, Jarman B, Elliott P. Paediatric cardiac surgical mortality in England after Bristol: descriptive analysis of hospital episode statistics 1991-2002. BMJ 2004;329:825-9.

[5] Aylin P, Jarman B, Bottle A. Comparison of hospital episode statistics and central cardiac audit database. http://www.bmj.com/cgi/eletters/bmj.39318.644549.AEv1 (accessed on 14 Oct 2007).

Competing interests: None declared

Routine data sets not shown to be useless 17 October 2007
Stephen Duckett,
Adjunct Professor, School of Population Health
University of Queensland


So what did Westaby and his colleagues actually show?

1. Using a broader definition of deaths (deaths within 30 days in any location, rather than 30-day in-hospital deaths), you capture more deaths. This does not by itself invalidate the use of routine data: the response should be that case fatality studies should use the broader definition (i.e. link the death register data to the routine hospital data, as sketched after this list).

2. Coding is better in the registry data set than in the routine data set, and with better coding (and, possibly, more variables with which to adjust for risk) you get better data capture, a point acknowledged in Aylin's original paper. If more specific codes are important (and Westaby et al demonstrate they are), this suggests adjusting the procedure code definitions in the routine data set rather than abandoning the data set altogether.
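
A minimal sketch of the linkage suggested in point 1, using illustrative field names rather than the actual HES or death-register schemas:

import pandas as pd

# Hypothetical routine episode extract: one row per operation.
episodes = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "procedure_date": pd.to_datetime(["2007-01-10", "2007-01-12", "2007-02-01"]),
    "died_in_hospital": [False, True, False],
})
# Hypothetical death-register extract: one row per registered death.
death_register = pd.DataFrame({
    "patient_id": [101, 102],
    "date_of_death": pd.to_datetime(["2007-01-30", "2007-01-15"]),
})

linked = episodes.merge(death_register, on="patient_id", how="left")
days_to_death = (linked["date_of_death"] - linked["procedure_date"]).dt.days

linked["died_30d_any_location"] = days_to_death.between(0, 30)
linked["died_30d_in_hospital"] = linked["died_30d_any_location"] & linked["died_in_hospital"]

print(linked[["patient_id", "died_30d_in_hospital", "died_30d_any_location"]])
# Patient 101 died at home within 30 days: captured by the linked (any location)
# definition but missed by the in-hospital definition.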

Westaby et al's conclusions, though, are important beyond the cardiac registry and give cause for a rethink about all registries. Why are registries separate from the routine data sets? It would be relatively easy to link the multiple data holdings so that, for these specific subsets of patients, the routine data set contains all the information about them. This would enhance the use of both data sources (a single patient might appear in more than one registry). The registries would then be conceptualised not as special, separate clinician data sets but as separate modules of the routine data set, giving more power and use to the routine data and leveraging the substantial investment that has been made in this data collection.

Note that in this response I have not used the term 'administrative data': information recorded by clinicians (such as diagnosis) is included in the routine data set. I, for one, would be unhappy about my diagnosis being made by clerks rather than medical practitioners!

Stephen Duckett
Stephen_Duckett@health.qld.gov.au

Competing interests: The author uses routine data in his research and as part of his work role.

Using Routine Data 19 October 2007
Helen Thornton-Jones,
Senior Lecturer
University of Hull HU6 7RX


Editor,

I have read the debate around this paper with mixed feelings: some interest, a degree of despair, and a certain amount of amusement.

Having had a long career in NHS information I consider myself to be a “recovering statistician”.

I have seen much time wasted as a result of well-intentioned people using data for purposes other than those for which they were collected, and highlighting differences that, on close investigation, turn out not to be indicators of poor care, although they have been held up as such.

Routinely collected data such as HES have their uses: as a source for early investigation of an issue, they can be fast, cheap and very powerful. However, people who use them without being aware of how and why they were created will always be at risk of unjustified scaremongering. “Special” datasets, e.g. specialised disease registers, can be misused equally badly.

When considering an issue such as the apparently high rate of mortality following cardiac surgery in Oxford, I find it helpful to consider the three “I”s: that is, to investigate systematically whether a high rate is the result of genuinely high INCIDENCE, an artefact of INFORMATION (e.g. coding issues), or the consequence of some planned INTERVENTION (such as a tendency to refer high-risk cases to Oxford).

Usually the answer cannot be derived from the data alone, but there are often valuable insights to be gained from the people who created the data or who have local knowledge of them, rather than from those who merely process them. Unfortunately, data processors such as Dr Foster are too far removed from the source data to access this sort of insight and to make real sense of the differences that they highlight.

It would be interesting to study how much real benefit can be attributed to the generation of routine indicators, as opposed to how much time has been wasted in trying to explain them.

Competing interests: None declared