Response to Outlier Status: Lessons From Public Reporting for Percutaneous Coronary Intervention
2017; Lippincott Williams & Wilkins; Volume: 135; Issue: 20. Language: English
DOI: 10.1161/CIRCULATIONAHA.117.027896
ISSN: 1524-4539
Editorial

Karen E. Joynt, MD, MPH

From Brigham and Women's Hospital, Boston, MA; and Harvard T.H. Chan School of Public Health, Boston, MA.

Originally published 16 May 2017. Circulation. 2017;135:1908–1910.

Article, see p 1897

Public reporting of outcomes for percutaneous coronary intervention (PCI) has been taking place since New York initiated such a program in the 1990s. Pennsylvania followed suit in 2000, and Massachusetts shortly thereafter.1 Over this time, public reporting has been the subject of a great deal of controversy, with supporters arguing that it drives critical improvements in care, and detractors arguing that it drives risk aversion and the denial of procedures to patients who may stand to benefit from PCI.2

In this issue of Circulation, Waldo et al3 put one specific element of public reporting under a microscope, with somewhat surprising results. The authors examine what happened to hospitals identified as mortality outliers under the public reporting programs in Massachusetts and New York. Outlier status was not uncommon: 31 hospitals (36%) were identified as outliers over the study period.

Contrary to expectations, Waldo et al report that hospitals identified as negative outliers did not limit the PCIs they subsequently performed, demonstrating growth in PCI rates similar to that of nonoutlier hospitals. In addition, and perhaps even more important, outlier hospitals demonstrated a significant reduction in mortality rates after identification, a benefit concentrated in patients receiving PCI. Taken together, these findings suggest that being identified as an outlier was associated with improvements in care quality.

To be clear, the present study does not answer the question of whether public reporting is associated with better outcomes than no public reporting, nor whether public reporting is associated with risk aversion; it examined only hospitals within public reporting states. It does, however, suggest that, conditional on having a state public reporting system, being identified as a negative outlier is associated with improvements in care and outcomes, perhaps by spurring catheterization laboratory directors and other clinical leaders to focus their efforts on care improvement with urgency.

Are these findings real? There are a few major threats to validity, which the authors acknowledge and attempt to mitigate. The first is regression to the mean: it is possible that hospitals are identified as outliers because of random variance rather than true differences in quality, and that their subsequent improvement simply represents a return to their true performance. The authors evaluated this possibility using both time trends and positive outliers as active controls, and thus such a phenomenon seems unlikely to be driving their findings.
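To see why this check matters, consider a minimal simulation sketch (illustrative only; the hospital count, case volume, and mortality rate below are assumptions, not figures from the study). If every hospital shares the same true mortality rate and observed rates differ only by sampling noise, hospitals flagged as worst-decile outliers in one year will, on average, appear to improve the next year despite no change in underlying quality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the study): 100 hospitals, each
# performing 500 PCIs per year, all with the same true mortality of 1.5%.
n_hospitals, cases_per_year, true_rate = 100, 500, 0.015

# Observed year-1 and year-2 mortality rates differ only by sampling noise.
year1 = rng.binomial(cases_per_year, true_rate, n_hospitals) / cases_per_year
year2 = rng.binomial(cases_per_year, true_rate, n_hospitals) / cases_per_year

# Flag "negative outliers": hospitals in the worst decile of year-1 mortality.
threshold = np.quantile(year1, 0.9)
outliers = year1 >= threshold

# Flagged hospitals "improve" in year 2 purely by regression to the mean.
print(f"Outlier year-1 mortality:    {year1[outliers].mean():.4f}")
print(f"Outlier year-2 mortality:    {year2[outliers].mean():.4f}")
print(f"Nonoutlier year-2 mortality: {year2[~outliers].mean():.4f}")
```

Because all simulated hospitals have identical true quality, the apparent year-2 improvement among flagged hospitals is entirely artifactual; benchmarking outliers against time trends and positive-outlier controls, as the authors did, is precisely the kind of check that distinguishes this artifact from real improvement.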
The second threat to validity is changes in the patient population that were not detectable in claims data. It is possible that outlier hospitals began shifting care away from higher-risk patients,4 as was suggested in a prior study of the impact of outlier status,5 while at the same time documenting higher levels of comorbidity in the patients to whom PCI was offered. Such a shift in case selection, particularly if predicated on poorly measured elements such as functional or cognitive status, would be difficult to detect in administrative data but could still harm high-risk patients. The authors conducted a number of analyses to determine whether this was taking place, including an examination of comorbidity coding over time, and the results were largely reassuring. However, subtleties in clinical decision making will always represent the blind spot of observational database research and should be kept in mind.

Follow-on research could clarify the issue further. Understanding what outlier hospitals did in response to their status might yield important knowledge. Were improvements gained through better processes of care? Better communication? Better patient selection? Better documentation? The answers could reassure skeptics that the improvements came through genuine quality improvement rather than risk aversion.

Another, broader issue to consider is how these findings might translate to other public reporting programs. Although only a small number of states have PCI public reporting programs, all 4000+ general acute-care hospitals nationally take part in Hospital Compare, a mandatory federal public reporting program that includes measures of processes, outcomes, and costs of care, as well as patient experience.6 Prior research has demonstrated, however, that hospitals named as outliers on Hospital Compare improved their mortality rates no more than nonoutlier hospitals did.7 Indeed, neither group of hospitals demonstrated improvements in mortality after the launch of Hospital Compare. Voluntary reporting programs, such as those run by the American College of Cardiology, also exist,8 but there has been little study of their efficacy to date.

Why might these 2 public reporting systems yield such different results? Key differences between the programs might explain their divergent effects. First, little attention is paid in the lay press to Hospital Compare outliers, whereas a great deal of attention is paid to outliers at the state level under the PCI public reporting programs, with stories of high mortality rates at outlier hospitals featured in major newspapers. Second, Hospital Compare provides a vast number of potentially conflicting data elements. A hospital might be a negative outlier on readmissions for acute myocardial infarction but average or better on other measures, making the reports challenging for a consumer to interpret. In contrast, state reporting for PCI is a single measure, focused on a single patient population, and easily communicated to the consumer.

Ironically, the features that likely make outlier status under Hospital Compare a less powerful incentive for hospitals to improve are probably the same ones that make the program a less powerful driver of risk aversion. If outlier status carries little consequence, and if there are many simultaneous chances to win and lose, incentives to avoid high-risk patients or to game the system are much weaker.
Although there have been a few notable examples of potential gaming in the Hospital Compare system,9,10 risk aversion there has not raised the same degree of concern as it has for PCI reporting.

Higher stakes may yield greater benefit, as seen in the mortality improvements reported in this article by Waldo et al, but they also come with greater risks, as seen in prior publications examining public reporting for PCI.11,12 Strategies that mitigate these risks while maximizing benefits may ultimately have the greatest potential to improve care while preserving access to it.

How might policy makers craft such strategies? There are a number of possibilities. State policy makers could consider pairing access measures with mortality measures in narrowly focused programs like public reporting for PCI, specifically to mitigate the concern that risk aversion will lead to inappropriate avoidance of procedures. Another way to reduce risk aversion is to improve risk adjustment, or at least providers' impression of risk adjustment; one way to do this is to remove particularly sick patients with extremely high mortality rates from performance assessment. Recent data have shown that the exclusion of patients with cardiogenic shock from public reporting in New York was associated with an increase in the rate of PCI use in this group and a commensurate decrease in mortality13; this strategy may be adaptable to other settings.

In contrast, federal policy makers could consider ways to make their public reporting more powerful, perhaps by enhancing the consumerism around the reports or by selecting a narrower set of key measures for greater emphasis. The recent move to a 5-star rating system on Hospital Compare, although controversial, may provide some of this focused attention for hospitals receiving a particularly low star rating.

The article by Waldo et al represents a significant step forward in our understanding of how providers respond to public reporting, and it may give us opportunities to learn from these programs' successes while we work to address their shortcomings.

Disclosures

Dr Joynt does contract work for the US Department of Health and Human Services, which administers the Hospital Compare program.

Footnotes

The opinions expressed in this article are not necessarily those of the editors or of the American Heart Association.

Circulation is available at http://circ.ahajournals.org.

Correspondence to: Karen E. Joynt, MD, MPH, 677 Huntington Avenue, Boston, MA 02115. E-mail [email protected]

References

1. Wasfy JH, Borden WB, Secemsky EA, McCabe JM, Yeh RW. Public reporting in cardiovascular medicine: accountability, unintended consequences, and promise for improvement. Circulation. 2015;131:1518–1527. doi: 10.1161/CIRCULATIONAHA.114.014118.
2. Dehmer GJ, Drozda JP, Brindis RG, Masoudi FA, Rumsfeld JS, Slattery LE, Oetgen WJ. Public reporting of clinical quality data: an update for cardiovascular specialists. J Am Coll Cardiol. 2014;63:1239–1245. doi: 10.1016/j.jacc.2013.11.050.
3. Waldo SW, McCabe JM, Kennedy KF, Zigler CM, Pinto DS, Yeh RW. Quality of care at hospitals identified as outliers in publicly reported mortality statistics for percutaneous coronary intervention. Circulation. 2017;135:1897–1907. doi: 10.1161/CIRCULATIONAHA.116.025998.
4. Resnic FS, Welt FG. The public health hazards of risk avoidance associated with public reporting of risk-adjusted outcomes in coronary intervention. J Am Coll Cardiol. 2009;53:825–830. doi: 10.1016/j.jacc.2008.11.034.
5. McCabe JM, Joynt KE, Welt FG, Resnic FS. Impact of public reporting and outlier status identification on percutaneous coronary intervention case selection in Massachusetts. JACC Cardiovasc Interv. 2013;6:625–630. doi: 10.1016/j.jcin.2013.01.140.
6. Centers for Medicare & Medicaid Services. Hospital Compare. https://www.medicare.gov/hospitalcompare/search.html. Accessed March 6, 2017.
7. Joynt KE, Orav EJ, Zheng J, Jha AK. Public reporting of mortality rates for hospitalized Medicare patients and trends in mortality for reported conditions. Ann Intern Med. 2016;165:153–160. doi: 10.7326/M15-1462.
8. American College of Cardiology. Quality Improvement for Institutions: ACC Public Reporting. https://cvquality.acc.org/NCDR-Home/Public-Reporting.aspx. Accessed March 6, 2017.
9. Lindenauer PK, Lagu T, Shieh MS, Pekow PS, Rothberg MB. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003–2009. JAMA. 2012;307:1405–1413.
10. McCabe JM, Kennedy KF, Eisenhauer AC, Waldman HM, Mort EA, Pomerantsev E, Resnic FS, Yeh RW. Reporting trends and outcomes in ST-segment-elevation myocardial infarction national hospital quality assessment programs. Circulation. 2014;129:194–202. doi: 10.1161/CIRCULATIONAHA.113.006165.
11. Joynt KE, Blumenthal DM, Orav EJ, Resnic FS, Jha AK. Association of public reporting for percutaneous coronary intervention with utilization and outcomes among Medicare beneficiaries with acute myocardial infarction. JAMA. 2012;308:1460–1468. doi: 10.1001/jama.2012.12922.
12. Waldo SW, McCabe JM, O'Brien C, Kennedy KF, Joynt KE, Yeh RW. Association between public reporting of outcomes with procedural management and mortality for patients with acute myocardial infarction. J Am Coll Cardiol. 2015;65:1119–1126. doi: 10.1016/j.jacc.2015.01.008.
13. McCabe JM, Waldo SW, Kennedy KF, Yeh RW. Treatment and outcomes of acute myocardial infarction complicated by shock after public reporting policy changes in New York. JAMA Cardiol. 2016;1:648–654. doi: 10.1001/jamacardio.2016.1806.

Keywords: health care quality, access, and evaluation; Editorials; hospital mortality; percutaneous coronary intervention

© 2017 American Heart Association, Inc. PMID: 28507249