Article | Open access | Peer-reviewed

The Ethics of Transparency: Publication of Cardiothoracic Surgical Outcomes in the Lay Press

2009; Elsevier BV; Volume: 87; Issue: 3; Language: English

10.1016/j.athoracsur.2008.12.043

ISSN

1552-6259

Authors

Jeffrey P. Jacobs, Robert J. Cerfolio, Robert M. Sade

Topic(s)

Healthcare Policy and Management

Abstract

Dr Jacobs discloses a financial relationship with CardioAccess.

Cardiothoracic surgical (CTS) outcomes have been published in the lay press for nearly 2 decades. Pressures to expand such publication come from many different areas and cannot be resisted indefinitely. Evidence exists that contemporary reporting of outcomes data is based on flawed methodologies that potentially mislead and deceive. Such deceptions may harm patients, surgeons, and hospitals in various ways, and could undermine the quality of surgical care and patients' access to it. Yet public reporting of outcomes can also be beneficial to all concerned, but only if the relevant data are accurate and the formats in which they are reported are valid and easily understood.

In this essay, we review the early history of public reporting of CTS outcomes, discuss potentially negative aspects of public reporting, and suggest solutions to these problems. We then consider the positive aspects of public reporting and provide recommendations for the future. We conclude that CTS data should be collected and analyzed under the direction of professional medical societies and that reports of outcomes based on such data should be published in the lay press.

The History of Public Reporting

In December 1990, New York state officials publicly released hospital-specific data on raw as well as risk-adjusted mortality for patients who underwent coronary artery bypass grafting (CABG). In January 1992, specific mortality figures for individual surgeons as well as for hospitals were reported. In November 1992, Pennsylvania followed suit [1]. From the outset, the methodology of New York's Cardiac Surgery Reporting System (CSRS) for adjusting risks and for comparing hospitals and surgeons has been intensely criticized. The prognostic accuracy of the CSRS model, and whether it could adequately account for case-mix variation among surgeons, has been challenged. Anecdotal reports suggest that some surgeons may have tried to avoid reporting adverse statistics by referring some of their sickest patients elsewhere. These issues brought into question the quality of the data that were being reported to the public [2].

Despite these flaws, public reporting of CTS outcome data continues because of the belief that data on individual surgeons and hospitals are useful to several distinct groups: individual patients for choosing hospitals and surgeons, surgeons for improving their own outcomes, hospitals for quality improvement, and governmental entities for developing health-related laws and regulations. Critical analysis of public reporting, however, has undermined these beliefs.

Problems with Public Reporting and Potential Solutions

Negative aspects of public reporting have been identified or alleged and have been used to disparage public reporting.
They are related to complexity adjustment, incomplete or inaccurate data, reduced access to CTS by the sickest patients, statistics that are unsound because they do not account for limited sample size or are based on flawed administrative databases, and insufficiently trained journalists.

Complexity Adjustment

The data used in outcomes reports might not be of suitable quality because they are not accurately adjusted for the complexity of the cases and patients [3, 4]. Inaccurate public information can be misleading or deceptive and therefore dangerous. Unless the quality of the data is high, public reporting of outcomes for cardiothoracic surgeons is not only useless but might be harmful to surgeons as well as patients. These harms will remain until accurate risk-stratified data are publicly available. A great deal of published information supports this assertion [1].

No risk-adjusted model can take into account all of the complexities of individual patients, according to Dranove's review of the New York experience in 2003: "It is essential for the analysts who create report cards to adjust health outcomes for differences in patient characteristics (risk adjustment). … But analysts can adjust for only characteristics that they can observe. … Because of the complexity of patient care, providers are likely to have better information on patients' conditions than even the most clinically detailed database. Thus, providers can improve their ranking by selecting patients on the basis of characteristics that are unobservable to the analyst but predictive of good outcomes" [5].

The Society of Thoracic Surgeons (STS) Database, comprising the General Thoracic, Adult Cardiac, and Congenital Heart Surgery Databases, includes methodology to address this problem. It currently includes 1051 participating sites and 3202 participating surgeons. More than 100 publications in professional journals and textbooks have come from the STS Database, and it has been federally funded to study quality improvement [6].

The STS Database is based on several fundamental principles that facilitate accurate comparison of outcomes and quality improvement of health care for patients undergoing cardiothoracic operations.
These principles include standardized nomenclature to facilitate meaningful analysis of outcomes, a minimum data set with precise and transparent definitions of data fields, accurate and verified mechanisms of adjustment for the complexity of the patients, and data verification.

CTS outcomes analysis requires case-mix adjustment because case mix varies among programs; without such adjustment, many surgeons and programs caring for high-risk patients will be inappropriately praised or criticized. Case-mix adjustment could eliminate perverse incentives for surgeons to avoid caring for the highest-risk patients, those who might need an operation the most and who might benefit the most from the procedure.

Case-mix adjustment can be accomplished either by complexity stratification, as is currently done in the STS Congenital Heart Surgery Database, or by formal risk modeling and risk-adjusted mortality, as is currently done in the STS Adult Cardiac Database. Table 1 compares these useful and acceptable techniques of case-mix adjustment [7, 8].

Table 1. Two Methods of Case-Mix Adjustment (or Risk Adjustment) [7]

Risk-Adjusted Mortality Rate
- The mortality rate is adjusted for differences in the composition of the patient population at the hospital of interest and the comparison group; it is an estimate of what a given hospital's mortality rate would be if its case mix were the same as the comparison group's.
- Estimates a hypothetical quantity.
- Usually requires statistical modeling.
- Produces a single summary measure.
- Requires assumptions (it assumes that a hospital that does well with low-risk cases will also do well with high-risk cases).
- Used in the STS Adult Cardiac Database.

Complexity Stratification
- A method of analysis in which the data are divided into relatively homogeneous groups (called strata) and analyzed within each stratum.
- Compares actual observed mortality rates.
- Simple; does not require a model.
- Produces a separate summary measure for each stratum.
- Requires no assumptions.
- Used in the STS Congenital Heart Surgery Database.

STS = Society of Thoracic Surgeons.
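Table 1 distinguishes a modeled summary from stratum-by-stratum observed rates. To make that distinction concrete, the following Python sketch (with invented patient records, strata, and predicted risks; it is not the actual STS methodology) computes a risk-adjusted mortality rate as an observed-to-expected ratio scaled by the comparison group's rate, and a complexity-stratified report of observed rates within each stratum.

```python
from collections import defaultdict

# Hypothetical records: each case has an outcome (1 = death), a model-predicted
# risk (used for risk adjustment), and a complexity stratum label (used for
# stratification). All numbers are illustrative only.
hospital_cases = [
    {"died": 0, "predicted_risk": 0.02, "stratum": "low"},
    {"died": 1, "predicted_risk": 0.15, "stratum": "high"},
    {"died": 0, "predicted_risk": 0.08, "stratum": "medium"},
    {"died": 1, "predicted_risk": 0.30, "stratum": "high"},
    {"died": 0, "predicted_risk": 0.03, "stratum": "low"},
]
COMPARISON_GROUP_RATE = 0.04  # overall mortality in the comparison population

def risk_adjusted_rate(cases, comparison_rate):
    """Observed-to-expected ratio times the comparison group's rate: an estimate
    of what this hospital's mortality would be if its case mix matched the
    comparison group."""
    observed = sum(c["died"] for c in cases)
    expected = sum(c["predicted_risk"] for c in cases)  # from a risk model
    return (observed / expected) * comparison_rate

def stratified_rates(cases):
    """Complexity stratification: report the observed mortality rate separately
    within each relatively homogeneous stratum."""
    totals = defaultdict(lambda: [0, 0])  # stratum -> [deaths, cases]
    for c in cases:
        totals[c["stratum"]][0] += c["died"]
        totals[c["stratum"]][1] += 1
    return {stratum: deaths / n for stratum, (deaths, n) in totals.items()}

print(risk_adjusted_rate(hospital_cases, COMPARISON_GROUP_RATE))
print(stratified_rates(hospital_cases))
```

The first function returns a single hypothetical figure for the hospital; the second returns one observed rate per stratum, mirroring the trade-offs listed in Table 1.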
No risk-adjusted model can take into account all of the complexities of individual patients. However, the risk modeling techniques used in the STS Database correlate well with reality, as demonstrated by several studies [9, 10, 11, 12]. The tools for complexity adjustment described in these studies demonstrate that the STS Database uses appropriate methods to adjust for case mix.

Data Verification

The data used in outcomes reports might not be of suitable quality because they are not complete or accurate [3, 4, 13, 14, 15, 16]. By comparison, data in the STS Database are verified for completeness and accuracy in two ways: an intrinsic verification process designed to rectify inconsistencies and missing data elements, and an on-site audit program that verifies data at their primary source. Data in the STS Adult Cardiac Surgery Database and the STS Congenital Heart Surgery Database are randomly audited as part of a formal on-site audit program conducted by an independent medical audit firm. These data verification efforts have demonstrated that the STS Database is a reliable source of complete and accurate data [13].
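The intrinsic verification step can be pictured as an automated consistency pass run before any on-site audit. The sketch below is only an illustration of that idea; the field names and rules are hypothetical and are not the STS data specification.

```python
# Hypothetical intrinsic verification pass: flag missing required fields and
# internally inconsistent records before they enter the analytic data set.
REQUIRED_FIELDS = ["patient_id", "procedure", "date_of_surgery", "discharge_status"]

def verify_record(record):
    """Return a list of problems found in one submitted record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            problems.append(f"missing field: {field}")
    # Example consistency rule: a patient recorded as deceased at discharge
    # should not also be recorded as discharged to home.
    if record.get("discharge_status") == "deceased" and record.get("discharge_to") == "home":
        problems.append("inconsistent: deceased patient discharged to home")
    return problems

records = [
    {"patient_id": "A1", "procedure": "CABG", "date_of_surgery": "2008-03-01",
     "discharge_status": "alive", "discharge_to": "home"},
    {"patient_id": "A2", "procedure": "CABG", "date_of_surgery": "",
     "discharge_status": "deceased", "discharge_to": "home"},
]

for r in records:
    issues = verify_record(r)
    if issues:
        print(r["patient_id"], issues)
```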
Reduced Access to Health Care

Anecdotal reports have suggested that public reporting can lead to referral of high-risk patients elsewhere, creating an access problem for the sickest patients, who might benefit the most from CTS [17]. As already noted, outcome data that lack accurate risk assessment and adjustment are useless, but they may be worse than useless: they may be dangerous. Providers caring for the sickest patients are inappropriately penalized by the absence of proper risk adjustment. Moreover, without complexity adjustment, providers have reason to avoid caring for high-risk, high-complexity patients, sending them elsewhere instead. Severely ill patients are more likely to travel to receive their care from centers of excellence [17]; apparently, they believe they have more to gain by traveling to busier, higher-quality centers. This relocation allows lower-quality centers to avoid high-risk patients, thus improving their apparent ranking. Similarly, Dranove showed that "there has been a shift towards operating on healthier patients" in lower-quality centers [17]. These lower-quality centers are less likely to offer surgical intervention to high-risk patients, even though these patients may benefit from it the most.

According to Capps, the overall effect of the New York report cards was to "change the incidence from sicker patients toward healthier patients and lead to a higher cost … and a deterioration of outcomes, especially among ill patients. We therefore conclude that the report cards were welfare-reducing" [17]. One solution to this potential problem of reduced access to health care is the use of robust clinical data sets that adjust properly for the complexity of the patients, so that CTS outcomes can be analyzed accurately.

Sample Size and Random Variation

The limited sample size of any individual surgeon's or hospital's experience can lead to (1) wide fluctuations in outcomes from year to year and (2) reporting of outcomes as different when they are actually statistically similar and appear to differ only by chance. Without proper adjustment for sample size and random variation, a shift of one or two deaths a year can have a dramatic effect on the reported outcomes of a surgeon or a hospital. This effect has been clearly seen in both New York and Pennsylvania, where small sample sizes and low mortality rates led to wide swings in rankings from year to year, despite the absence of demonstrable differences in quality of care. Failure to adjust properly for sample size and random variation can mislead and deceive.
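A back-of-the-envelope calculation shows how fragile a small program's reported rate is. The sketch below (a hypothetical volume of 150 cases per year and a standard Wilson score interval, not any registry's method) compares 3 deaths with 5 deaths: the crude rates differ noticeably, yet the confidence intervals overlap heavily.

```python
import math

def wilson_interval(deaths, n, z=1.96):
    """Approximate 95% Wilson score interval for an observed mortality rate."""
    p = deaths / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical program performing 150 cases a year: a shift of two deaths
# moves the crude rate from 2.0% to 3.3%, yet the intervals overlap widely.
for deaths in (3, 5):
    low, high = wilson_interval(deaths, 150)
    print(f"{deaths} deaths / 150 cases: rate {deaths/150:.1%}, 95% CI {low:.1%} to {high:.1%}")
```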
CTS outcomes have been publicly reported using league tables, which are fundamentally flawed, unnecessary, and inappropriate: they use outcome data to rank participants, with no adjustment for sample size or random variation. League tables are commonly used in sports to rank teams or individual athletes by unadjusted outcome data, usually wins and losses. League tables always have winners and losers, with someone at the top and someone at the bottom, even when no true difference exists between the subjects being ranked [18]. Figure 1 plots the risk-adjusted 30-day mortality rates after CABG in New York between 1997 and 1999 and shows the ranked rates for individual surgeons with 95% confidence intervals. In the original publication of these data, the surgeons were named [19]. The widths of the confidence intervals in Figure 1 show few intervals that do not overlap, revealing considerable uncertainty about the true underlying mortality rates. This uncertainty, however, is not reflected in the ranks of specific surgeons [18].

Spiegelhalter has pointed out that if one thinks of the intervals in Figure 1 as expressing probability distributions for the true mortality rates, and one then samples those distributions and ranks each set of generated samples, a set of plausible "true ranks" is created. As revealed in Figure 2, these ranks show substantial uncertainty. The intervals for most surgeons are very wide: only 2 of 175 can be confidently placed in the lowest mortality quartile and only 6 in the highest mortality quartile. Thus, "any 'league table' is largely spurious, apart from possibly identifying some extreme cases that can confidently be placed in, say, the top or bottom quarter" [18].

Fig 2. Median estimates and 95% intervals for the true ranks of 175 New York surgeons, obtained by sampling from the probability distributions in Fig 1 and ranking each set of samples. Most surgeons have very wide rank intervals: only 2 of 175 can be confidently placed in the "best" quarter and only 6 in the "worst" quarter. (Reproduced with permission from Spiegelhalter [18].)
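Spiegelhalter's rank-uncertainty argument can be reproduced with a short Monte Carlo exercise. The sketch below uses invented death counts and volumes for five surgeons (not the New York data): it repeatedly draws a plausible true mortality rate for each surgeon from a Beta posterior, ranks each simulated draw, and summarizes the resulting spread of ranks. With volumes this small, the 95% rank intervals typically span most of the field.

```python
import random

random.seed(1)

# Hypothetical surgeons: (deaths, cases); volumes are deliberately small.
surgeons = {"A": (2, 100), "B": (3, 120), "C": (1, 80), "D": (4, 150), "E": (2, 90)}

def rank_intervals(surgeons, n_sims=5000):
    """For each simulation, draw a plausible true mortality rate per surgeon
    from a Beta(deaths + 1, survivors + 1) posterior, rank the draws
    (rank 1 = lowest simulated mortality), and record each surgeon's rank."""
    ranks = {name: [] for name in surgeons}
    for _ in range(n_sims):
        draws = {name: random.betavariate(d + 1, n - d + 1)
                 for name, (d, n) in surgeons.items()}
        for position, name in enumerate(sorted(draws, key=draws.get), start=1):
            ranks[name].append(position)
    summary = {}
    for name, r in ranks.items():
        r.sort()
        lo, median, hi = r[int(0.025 * len(r))], r[len(r) // 2], r[int(0.975 * len(r)) - 1]
        summary[name] = (lo, median, hi)
    return summary

for name, (lo, median, hi) in rank_intervals(surgeons).items():
    print(f"surgeon {name}: median rank {median}, 95% rank interval {lo}-{hi}")
```

Printing the intervals makes the point of Figure 2 directly: data at these volumes cannot support a confident ordering.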
Spiegelhalter addressed the problem of small sample size by describing the funnel plot, which plots the mortality rates of institutions or surgeons together with 95% and 99% binomial confidence limits centered on the average population mortality rate [20]. A funnel plot is a mechanism for identifying outliers in performance without creating league tables. Figure 3 shows a funnel plot of the New York surgeons' data and demonstrates that the vast majority are not outliers. Spiegelhalter states, "The plot makes clear that there is no point in carrying out a ranking exercise on those in the 'funnel'."

Fig 3. Funnel plot of risk-adjusted 30-day mortality rates after coronary artery bypass grafting for 175 New York surgeons, 1997 to 1999. The solid horizontal line shows the average 30-day mortality rate for these surgeons; the dotted lines show the 95% limits and the dashed lines the 99.9% limits. Each dot represents a surgeon, and only surgeons outside the funnel are outliers; the vast majority are not. (Reproduced with permission from Spiegelhalter [18].)

Funnel plots are used to publicly report outcome data in the United Kingdom Central Cardiac Audit Database (CCAD) [21]. Since 2007, the STS Congenital Heart Surgery Database Report has used similar techniques that allow identification of outliers without creating a league table that ranks programs with no true differences between them [22, 23]. The funnel plot explicitly demonstrates the substantial random sampling variation that occurs at low volumes and the difficulty of distinguishing among levels of performance. As a consequence, surgeons and institutions with randomly high mortality rates are protected from inappropriate conclusions about their data, and patients and interested institutions are similarly protected from making erroneous decisions in favor of surgeons and institutions with randomly low mortality rates. This methodology clearly demonstrates that statistical tools are available to account for both small sample size and random variation.
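A funnel plot needs only the pooled mortality rate and, for each unit's volume, control limits derived from the binomial distribution. The sketch below uses hypothetical hospitals and a normal approximation to the binomial limits (published funnel plots often use exact limits); a unit is flagged only when its observed rate falls outside the outer limits for its own volume.

```python
import math

# Hypothetical hospitals: (deaths, cases); the target is the pooled average rate.
hospitals = {"H1": (4, 200), "H2": (9, 300), "H3": (2, 150), "H4": (18, 400)}

pooled_deaths = sum(d for d, _ in hospitals.values())
pooled_cases = sum(n for _, n in hospitals.values())
target = pooled_deaths / pooled_cases  # centre line of the funnel

def funnel_limits(n, p, z):
    """Normal approximation to binomial control limits around the target rate
    for a unit with n cases; the limits narrow as volume grows."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), p + half_width

for name, (deaths, n) in hospitals.items():
    rate = deaths / n
    lo95, hi95 = funnel_limits(n, target, 1.96)     # ~95% limits
    lo999, hi999 = funnel_limits(n, target, 3.29)   # ~99.9% limits
    flag = "outlier" if rate < lo999 or rate > hi999 else "inside the funnel"
    print(f"{name}: rate {rate:.1%}, 95% limits {lo95:.1%}-{hi95:.1%}, {flag}")
```

Plotting these limits against volume produces the narrowing funnel shape; points inside it are consistent with the pooled rate and do not merit ranking.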
Administrative Data and the Role of the Government

Government reports of CTS outcomes are based on severely flawed administrative databases, and many nongovernmental organizations have advocated public reporting of CTS outcomes based on these flawed administrative data. The Consumers' Checkbook Web site (checkbook.org) uses administrative Medicare claims data to provide information about the number of procedures a particular surgeon performs for a specific type of operation. The Center for the Study of Services, which owns this Web site, has successfully sued the United States government to obtain Medicare claims data on all physicians participating in the Medicare program [24]. At this writing, the case is under appeal by the Department of Health and Human Services and the Department of Justice; the chances of overturning the lower court's decision are uncertain.

If successful again, the Consumers' Checkbook organization would publish physician-level data from Medicare on its Web site. These ratings would reflect the data in the Medicare database and thus capture only a fraction of the clinical experience of many surgeons. The Medicare database is largely restricted to patients aged 65 years or older, which biases the analysis and misleads those who use it. Few patients who visit checkbook.org are likely to be aware of this flaw or to understand its meaning, despite the Web site's attempt to explain it.

For example, consider a patient, let's call him Joe Internet, who is searching for "the best" mitral valve surgeon to repair his 44-year-old wife's mitral valve regurgitation. Joe and his wife are interested in valve repair rather than replacement. They visit the checkbook.org Web site and view the ranking of surgeons who perform the most mitral valve operations in the United States. They are thrilled to tell their friends that they have identified the best and busiest surgeon. But have they? Or have they been misled?

Because many mitral valve repairs are performed on patients younger than 65 years, a large proportion of these procedures are absent from the analysis. The site has misled, or frankly deceived, Joe and his wife about who performs the most mitral valve repairs. In addition to volume, checkbook.org also intends to list the cost of a particular operation by a particular surgeon; yet length of stay, one of the few components of cost that surgeons can partly control, is not listed. Inaccurate or incomplete data are worse than no data at all.

Government agencies cannot fully understand the complexities of medical and surgical outcomes analysis without the leadership of professional medical and surgical societies. We must help those agencies understand the best data sources and the best reporting methodologies. Accurate reporting of CTS outcomes requires reliance on clinical databases rather than administrative databases. A recent study compared data on isolated CABG results from an audited and validated clinical registry with data derived from a contemporaneous state administrative database, using the inclusion/exclusion criteria and risk model of the Agency for Healthcare Research and Quality [25].
This study concluded, "Cardiac surgery report cards using administrative data are problematic compared with those derived from audited and validated clinical data, primarily because of case misclassification and nonstandardized end points."

Three recent investigations compared the coding of congenital cardiac disease in clinical databases with that in administrative databases based on the International Classification of Diseases (ICD) coding system. They demonstrated that the validity of ICD coding for congenitally malformed hearts is likely to be poor [26, 27, 28]. Several explanations for the poor diagnostic accuracy of administrative databases that use ICD codes are plausible: accidental miscoding, coding by medical records clerks who have not seen the patient, contradictory or inaccurate information in the medical record, lack of diagnostic specificity for cardiothoracic disease in the ICD codes, and inadequately trained coding clerks.

Governmental administrative databases were created to facilitate billing; they do not reflect CTS outcomes with the level of detail and accuracy necessary for useful outcomes analysis. Meaningful analysis of CTS outcomes requires the use of clinical databases rather than administrative databases.

The Role of the Lay Press

Lay press reports of CTS outcomes are problematic because the average health care reporter does not have the training and background to appreciate the science behind the analysis of medical and surgical outcomes. For example, on March 1, 2001, the Denver Post published a front-page article, "Children's Hospital Cardiology Chief Told to Resign." The reporter wrote, "T

Reference(s)

1. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008;148:111-23.
2. Green J, Wintfeld N. Report cards on cardiac surgeons: assessing New York State's approach. N Engl J Med 1995;332:1229-32.
3. Lacour-Gayet FG, Clarke D, Jacobs JP, et al. The Aristotle score: a complexity-adjusted method to evaluate surgical results. Eur J Cardiothorac Surg 2004;25:911-24.
4. Jacobs JP, Lacour-Gayet FG, Jacobs ML, et al. Initial application in the STS congenital database of complexity adjustment to evaluate surgical case mix and results. Ann Thorac Surg 2005;79:1635-49.
5. Dranove D, Kessler D, McClellan M. Is more information better? The effects of "report cards" on health care providers. J Polit Econ 2003;111:555-88.
6. Society of Thoracic Surgeons. Welcome to the STS National Database. Available at: http://www.sts.org/sections/stsnationaldatabase/.
7. O'Brien SM. Complexity stratification vs. risk adjustment: what is the difference? Presented at: The Society of Thoracic Surgeons Advances in Quality & Outcomes Conference; Minneapolis, MN; November 1-3, 2007.
8. O'Brien SM. Z-scores and confidence intervals. Presented at: The Society of Thoracic Surgeons Advances in Quality & Outcomes Conference; Minneapolis, MN; November 1-3, 2007.
9. Jacobs ML, Jacobs JP, Jenkins KJ, Gauvreau K, Clarke DR, Lacour-Gayet FG. Stratification of complexity: the Risk Adjustment for Congenital Heart Surgery-1 method and the Aristotle Complexity Score: past, present, and future. Cardiol Young 2008;18:163-8.
10. Shroyer AL, Coombs LP, Peterson ED, et al. The Society of Thoracic Surgeons: 30-day operative mortality and morbidity risk models. Ann Thorac Surg 2003;75:1856-64.
11. Welke KF, Shen I, Ungerleider RM. Current assessment of mortality rates in congenital cardiac surgery. Ann Thorac Surg 2006;82:164-71.
12. O'Brien SM, Jacobs JP, Clarke DR, et al. Accuracy of the Aristotle Basic Complexity Score for classifying the mortality and morbidity potential of congenital heart surgery operations. Ann Thorac Surg 2007;84:2027-37.
13. Clarke DR, Breen LS, Jacobs ML, et al. Data verification in congenital cardiac surgery. Cardiol Young 2008;18:177-87.
14. Elfstrom J, Stubberod A, Troeng T. Patients not included in medical audit have a worse outcome than those included. Int J Qual Health Care 1996;8:153-7.
15. Gibbs JL, Monro JL, Cunningham D, Rickards A. Survival after surgery or therapeutic catheterisation for congenital heart disease in children in the United Kingdom: analysis of the central cardiac audit database for 2000-1. BMJ 2004;328:611.
16. Maruszewski B, Lacour-Gayet F, Monro JL, Keogh BE, Tobota Z, Kansy A. An attempt at data verification in the European Association for Cardio-Thoracic Surgery Congenital Database. Eur J Cardiothorac Surg 2005;28:400-4.
17. Capps CS, Dranove D, Greenstein S, Satterthwaite M. The silent majority fallacy of the Elzinga-Hogarty criteria: a critique and new approach to analyzing hospital mergers. Working paper. Washington, DC: National Bureau of Economic Research; 2001.
18. Spiegelhalter DJ. League tables. In: Armitage P, Colton T, eds. Encyclopaedia of biostatistics. Chichester, UK: John Wiley and Sons; 2005:2478-2751.
19. New York State Department of Health. Coronary artery bypass surgery in New York State, 1997-9. Available at: http://www.health.state.ny.us/nysdoh/heart/heartdisease.htm.
20. Spiegelhalter DJ. Funnel plots for comparing institutional performance. Stat Med 2005;24:1185-202.
21. Jacobs ML, Jacobs JP, Franklin RCG, et al. Databases for assessing the outcomes of the treatment of patients with congenital cardiac disease: the surgical perspective. Cardiol Young 2008;18:101-15.
22. Jacobs JP, Jacobs ML, Mavroudis C, Lacour-Gayet FG, Tchervenkov CI. Executive summary: The Society of Thoracic Surgeons Congenital Heart Surgery Database, seventh harvest (2003-2006). Durham, NC: The Society of Thoracic Surgeons and Duke Clinical Research Institute; 2007.
23. Jacobs JP, Jacobs ML, Mavroudis C, Lacour-Gayet FG, Tchervenkov CI. Executive summary: The Society of Thoracic Surgeons Congenital Heart Surgery Database, eighth harvest (2004-2007). Durham, NC: The Society of Thoracic Surgeons and Duke Clinical Research Institute; 2008.
24. U.S. District Court for the District of Columbia. Consumers' Checkbook, Center for the Study of Services v. U.S. Department of Health and Human Services, Civil Action No. 06-2201 (EGS). August 22, 2007. Available at: http://www.checkbook.org/Press/doc/Court%20Opinion.pdf.
25. Shahian DM, Silverstein T, Lovett AF, Wolf RE, Normand SL. Comparison of clinical and administrative data sources for hospital coronary artery bypass graft surgery report cards. Circulation 2007;115:1518-27.
26. Cronk CE, Malloy ME, Pelech AN, et al. Completeness of state administrative databases for surveillance of congenital heart disease. Birth Defects Res A Clin Mol Teratol 2003;67:597-603.
27. Frohnert BK, Lussky RC, Alms MA, Mendelsohn NJ, Symonik DM, Falken MC. Validity of hospital discharge data for identifying infants with cardiac defects. J Perinatol 2005;25:737-42.
28. Strickland MJ, Riehle-Colarusso TJ, Jacobs JP, et al. The importance of nomenclature for congenital heart disease: implications for research and evaluation. Cardiol Young 2008;18:92-100.