Trust, But Verify
Editorial

Brahmajee K. Nallamothu, MD, MPH

Department of Internal Medicine, Michigan Integrated Center for Health Analytics and Medical Prediction, University of Michigan Medical School, Ann Arbor; Center for Clinical Management and Research, Ann Arbor VA Medical Center, MI

Circulation: Cardiovascular Quality and Outcomes. 2019;12:e005942. Originally published July 9, 2019. ISSN 1941-7705. https://doi.org/10.1161/CIRCOUTCOMES.119.005942
Trust, but verify.
—Russian Proverb

There is a problem with transparency and replication in health services and outcomes research. And we all know it.

A recent controversy underscores this issue. In April 2010, the Hospital Readmissions Reduction Program (HRRP) was announced by the Centers for Medicare and Medicaid Services as part of the Affordable Care Act. The program encouraged hospitals to reduce unnecessary readmissions by penalizing poor-performing hospitals. After a period of public reporting without penalties, the HRRP was implemented with penalties in October 2012, targeting 3 conditions: acute myocardial infarction, heart failure, and pneumonia. Initial evaluations of the program suggested that readmissions were reduced, with substantial savings to the Centers for Medicare and Medicaid Services and penalties paid by hospitals.1,2
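To make the incentive concrete, consider a deliberately simplified sketch of the penalty logic. CMS's actual methodology rests on risk-standardized excess readmission ratios and payment-adjustment factors and is far more involved; the function names, thresholds, and arithmetic below are illustrative assumptions, not the official specification.

```python
# Hypothetical, simplified sketch of HRRP-style penalty logic.
# All names and thresholds here are illustrative assumptions.

def excess_readmission_ratio(predicted: float, expected: float) -> float:
    """Ratio of predicted to expected readmissions for one condition;
    values above 1.0 indicate worse-than-expected performance."""
    return predicted / expected

def penalty_fraction(ratios: list[float], cap: float = 0.03) -> float:
    """Toy payment penalty: grows with excess readmissions across the
    3 target conditions and is capped (as the real program caps it)
    at a maximum share of base Medicare payments."""
    excess = sum(max(r - 1.0, 0.0) for r in ratios)
    return min(excess, cap)

# A hospital slightly above expectation for heart failure and
# pneumonia, and at expectation for acute myocardial infarction:
ratios = [excess_readmission_ratio(105, 100),  # heart failure
          excess_readmission_ratio(101, 100),  # pneumonia
          excess_readmission_ratio(100, 100)]  # acute MI
print(f"Payment reduction: {penalty_fraction(ratios):.1%}")
```

Even in this toy version, the key feature is visible: the penalty turns entirely on how "expected" readmissions are modeled, which is precisely where analytic judgment enters.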
Yet concern about unintended consequences of the HRRP remained, including whether hospitals were incentivized to avoid higher-risk patients. In 2017, Yale researchers using Medicare claims data reported in JAMA that hospitals that reduced 30-day readmission rates did not have higher 30-day mortality rates—a reassuring result.3 In early 2018, however, a provocative analysis published in JAMA Cardiology using the American Heart Association Get With The Guidelines–Heart Failure Registry found a disturbing trend toward increased 30-day and 1-year mortality after implementation of the HRRP.4

A flurry of activity quickly followed this last report. First, 2 studies using Medicare claims data—one by the Medicare Payment Advisory Commission5 and another from Yale published in JAMA Network Open6—suggested no signals of harm with the HRRP. Yet just a few months later, Harvard investigators, also using Medicare claims data, reported a dramatically different conclusion in JAMA: introduction of the HRRP was associated with a rise in mortality after heart failure and pneumonia discharges.7 This article—published with a concomitant opinion piece in the New York Times—caused an immediate firestorm. Fierce debate ensued across social media about the investigators' analytical choices and methodologies, as well as their underlying financial and intellectual conflicts of interest.

So did the introduction of the HRRP lead to a reduction in readmissions or an increase in mortality? At this point, I honestly have no (expletive) idea.

What do I know? First, the groups working in this space include exceptional investigators—some of the best I know—who have frequently informed national policies over the years based on their research of Medicare claims data. These teams are the real deal. Second, these groups arrived at their conclusions after careful planning and thoughtful analyses. I have been fortunate enough to discuss the HRRP with many of them. In my opinion, these researchers are serious individuals trying their best to get at the heart of whether the HRRP has been helpful or harmful. A big part of the challenge, no doubt, is that the question itself is so hard to answer.

But none of that explains why these teams came to startlingly different conclusions despite using similar (if not identical) data sources. One obvious candidate is their analytical choices. Investigators make dozens of decisions throughout a study, on topics ranging from cohort creation to comorbidity adjustment to statistical tests of comparison, and these choices can substantially affect results. Although this should be an easy issue to explore, it has not been in the case of the HRRP. That is because we have a system that continues to undervalue transparency when it comes to sharing our science. In not a single study I describe above are statistical code and data readily available for public review. Without access to either, readers have limited ability to assess why differences exist.
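As one small, hedged illustration of how far reasonable analysts can drift apart, the sketch below applies two defensible outcome definitions to the same fully synthetic cohort and gets two different readmission rates. Every variable and rule here is invented for illustration; nothing reproduces any of the published HRRP analyses.

```python
# Toy illustration (entirely synthetic data) of how two defensible
# analytic choices yield different answers from the same source.
import random

random.seed(42)

# Synthetic index admissions: days until the next admission (None =
# none observed) and whether that next admission was planned (e.g.,
# a staged procedure). Both fields are invented for this example.
cohort = [
    {"days_to_next": random.choice([None, 5, 12, 25, 28, 31, 45]),
     "planned": random.random() < 0.15}
    for _ in range(10_000)
]

def readmission_rate(cohort, window: int, exclude_planned: bool) -> float:
    """Share of index admissions followed by a readmission within
    `window` days, under the chosen handling of planned admissions."""
    events = sum(
        1 for p in cohort
        if p["days_to_next"] is not None
        and p["days_to_next"] <= window
        and not (exclude_planned and p["planned"])
    )
    return events / len(cohort)

# Same data, two reasonable definitions, two different answers:
print(f"30-day, all readmissions:  {readmission_rate(cohort, 30, False):.1%}")
print(f"30-day, excluding planned: {readmission_rate(cohort, 30, True):.1%}")
```

Multiply choices like these across cohort construction, risk adjustment, and comparison periods, and similar data can plausibly support divergent conclusions.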
This call for greater transparency is separate from calls for better study designs and interpretation. It is clear that we need better approaches for rolling out healthcare policy interventions, as others have highlighted in recent months.8,9 Some could also argue that the numerical differences in results across these reports are more modest than their widely contradictory interpretations, perhaps reflecting our over-reliance on statistical tests of significance, P values, and dichotomania. Yet the point I am raising is more direct than these larger questions about the underlying principles of scientific inquiry: why are statistical code and data not made routinely available at the time of publication?

Many of us know the arguments and challenges well enough. Indeed, I readily admit that I contribute to the problem by being hesitant to share statistical code or data myself. Sometimes this was for legitimate reasons: privacy concerns; the substantial time investment in making such information readable and usable; and uncertainty about the intent or qualifications of others using the data or statistical code. But I will also admit to less noble motivations. It takes substantial time and effort to collect data and write code, and simply releasing them (when others do not) seems unfair in our current system. And if I am truly honest, there is probably a part of me that is afraid of being embarrassed if errors are identified. It is entirely understandable for investigators to hesitate, especially when sharing is not required. Yet these excuses are becoming more difficult to justify, and we need creative solutions to overcome these cultural barriers.

Ironically, this means we would be following the practices of industry-sponsored clinical trials—a group frequently maligned for their conflicts of interest. They have shown that sharing can be done without sacrificing key principles around privacy through programs like the Yale Open Data Access Project.10 And at the very least, even when we are unable to share data, statistical code should be released. Recent articles in the American Economic Review provide examples of how to routinely share statistical code and data even for analyses of Medicare data (by restricting what is ultimately shared to only essential elements).11 A recent article evaluating the HRRP even shared statistical code at the time it was published in the Annals of Internal Medicine—a journal with a long-standing policy of sharing data and code when possible.12,13

These examples show us that this road is possible. My research team recently published 2 analyses of Medicare claims data for which we shared our statistical code on the online software development platform GitHub, so that others may replicate our work. Furthermore, we are exploring ways to generate synthetic or semisynthetic datasets that may allow data to be shared alongside statistical code. I encourage you to read the fascinating article by Beaulieu-Jones et al14 published in this issue of Circulation: Cardiovascular Quality and Outcomes. In this work, the investigators explored training generative adversarial networks (a type of deep neural network) under differential privacy rules to generate synthetic patients that maintained summary statistics and multivariable relationships at the cohort level. This type of innovative work points to a potential future in which barriers to data sharing that stem from privacy concerns could fall dramatically.
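The generative adversarial approach of Beaulieu-Jones et al is beyond a short sketch, but the differential-privacy idea at its core (release cohort-level statistics only after adding noise calibrated to a privacy budget) can be shown in a few lines. This is a minimal illustration under invented assumptions (the bounds, epsilon, and crude Gaussian sampler are all mine), not the method of the paper.

```python
# Minimal sketch of differentially private release on a hypothetical
# cohort. This is NOT the generative adversarial network approach of
# Beaulieu-Jones et al; it only illustrates the privacy idea.
import math
import random

random.seed(7)

def dp_mean(values, lo, hi, epsilon):
    """Return the mean plus Laplace noise scaled to the query's
    sensitivity, so any one patient's record has bounded influence."""
    n = len(values)
    sensitivity = (hi - lo) / n  # max change from altering one record
    u = random.random() - 0.5    # inverse-CDF draw of Laplace noise
    noise = -(sensitivity / epsilon) * math.copysign(1.0, u) \
            * math.log(1.0 - 2.0 * abs(u))
    return sum(values) / n + noise

# Hypothetical cohort: ages clipped to a public range of [18, 100].
ages = [min(max(random.gauss(72, 10), 18), 100) for _ in range(5_000)]
released_mean = dp_mean(ages, lo=18, hi=100, epsilon=1.0)

# Crude synthetic patients drawn around the privately released
# summary; a trained generator would capture far richer structure.
synthetic_ages = [random.gauss(released_mean, 10.0) for _ in range(5_000)]
print(f"Privately released mean age: {released_mean:.1f}")
```

Synthetic records built along these lines would let reviewers rerun statistical code end to end without any patient-level disclosure; the privacy cost is the fidelity lost to the added noise.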
However, even these advances will not be enough without a major shift in our culture. One immediate step that could jump-start this movement would be for journals to require investigators to post statistical code in all circumstances, and data when possible. These should be actual postings rather than mere statements of availability on request. Ideally, this would be set as an expectation upfront, not handled on an ad hoc basis. It would move our scientific community from transparency in the description of methods and data collection—the current standard we strive for in journals—to public availability and reusability of study-specific information. Since the mid-1990s, many fields within the empirical social sciences have made such advances. For example, journals published by the American Economic Association require the following: "Authors of accepted papers that contain empirical work, simulations, or experimental work must provide, before publication, the data, programs, and other details of the computations sufficient to permit replication."15

Of course, the largest benefit will be to our science. As someone who conducts research using Medicare claims data and similar data sources, I know these studies can be complex and often require hundreds of lines of statistical code. Inadvertent bugs with massive consequences are not outside the realm of possibility, as other disciplines have discovered.16,17 Given such uncertainty, how can we, as consumers of this research, adequately judge these studies without greater transparency? Should we believe the latest one? Or the one published in the journal with the highest impact factor? Or the one covered by the New York Times? All of those options seem wrong, and more transparency may be one way to fix that.

In June 2018, the New England Journal of Medicine published an article on the mortality effects of Hurricane Maria in Puerto Rico.18 What was most extraordinary about this study was not its results but the authors' willingness to commit fully to transparency. They wrote: "These adjustments represent one simple way to account for biases, but we have made our data publicly available for additional analyses." Translation: this is one way to do it; show us how to do it better. Almost immediately, the data and statistical code were downloaded by method and content experts and examined for discrepancies and errors. The robustness of the findings to this level of scrutiny was critical because the study directly contradicted the US government's official statistics. Imagine instead if the authors had published the article and simply stated, "Trust us."

The time has come for us to take Open Science seriously in health services and outcomes research. At Circulation: Cardiovascular Quality and Outcomes, we have for several years given authors the opportunity to share data and code in online appendices. Only a few have taken advantage of it. Now, we are making a commitment that sharing data and statistical code will be an explicit criterion in our evaluation of manuscripts. This does not mean we will reject manuscripts whose authors choose not to share. But we will be more direct about having authors state their objections to doing so (consistent with the American Heart Association Transparency and Openness Promotion Guidelines), and we will look more favorably on manuscripts that do offer to share. Circulation: Cardiovascular Quality and Outcomes cannot do this alone. I am hopeful that other editors—particularly those who lead our foremost journals—will begin to do the same.

Very few answers in science are clear-cut or simple. Most depend on the critical assumptions we make as part of an iterative process in research. It is time to make those choices clear for others to see, assess, debate, replicate, and build upon. It is the only way we can ultimately move toward consensus and understanding.

We need to trust, but verify.

Disclosures

Dr Nallamothu is a principal investigator or coinvestigator on research grants from the National Institutes of Health, Veterans Affairs Health Services Research & Development, the American Heart Association, and Apple, Inc. He also receives compensation as Editor-in-Chief of Circulation: Cardiovascular Quality and Outcomes, a journal of the American Heart Association. Finally, he is a coinventor on US Utility Patent Number US15/356,012 (US20170148158A1), Automated Analysis of Vasculature in Coronary Angiograms, which uses signal processing and machine learning to automate the reading of coronary angiograms; the patent is held by the University of Michigan and licensed to AngioInsight, Inc, in which he holds ownership shares.

Footnotes

The opinions expressed in this article are not necessarily those of the American Heart Association.

Correspondence to: Brahmajee K. Nallamothu, MD, MPH, University of Michigan Medical School, Internal Medicine-Cardiovascular Medicine, N Campus Research Complex, 2800 Plymouth Rd, Bldg 16, Ann Arbor, MI 48109. Email [email protected]

References

1. Zuckerman RB, Sheingold SH, Orav EJ, Ruhter J, Epstein AM. Readmissions, observation, and the hospital readmissions reduction program. N Engl J Med. 2016;374:1543–1551. doi: 10.1056/NEJMsa1513024
2. Desai NR, Ross JS, Kwon JY, Herrin J, Dharmarajan K, Bernheim SM, Krumholz HM, Horwitz LI. Association between hospital penalty status under the hospital readmission reduction program and readmission rates for target and nontarget conditions. JAMA. 2016;316:2647–2656. doi: 10.1001/jama.2016.18533

3. Dharmarajan K, Wang Y, Lin Z, Normand ST, Ross JS, Horwitz LI, Desai NR, Suter LG, Drye EE, Bernheim SM, Krumholz HM. Association of changing hospital readmission rates with mortality rates after hospital discharge. JAMA. 2017;318:270–278. doi: 10.1001/jama.2017.8444

4. Gupta A, Allen LA, Bhatt DL, Cox M, DeVore AD, Heidenreich PA, Hernandez AF, Peterson ED, Matsouaka RA, Yancy CW, Fonarow GC. Association of the hospital readmissions reduction program implementation with readmission and mortality outcomes in heart failure. JAMA Cardiol. 2018;3:44–53. doi: 10.1001/jamacardio.2017.4265

5. Medicare Payment Advisory Commission. Mandated report: the effects of the hospital readmissions reduction program. http://www.medpac.gov/docs/default-source/reports/jun18_ch1_medpacreport_sec.pdf?sfvrsn=0. Accessed February 11, 2019.

6. Khera R, Dharmarajan K, Wang Y, Lin Z, Bernheim SM, Wang Y, Normand ST, Krumholz HM. Association of the hospital readmissions reduction program with mortality during and after hospitalization for acute myocardial infarction, heart failure, and pneumonia. JAMA Netw Open. 2018;1:e182777. doi: 10.1001/jamanetworkopen.2018.2777

7. Wadhera RK, Joynt Maddox KE, Wasfy JH, Haneuse S, Shen C, Yeh RW. Association of the hospital readmissions reduction program with mortality among Medicare beneficiaries hospitalized for heart failure, acute myocardial infarction, and pneumonia. JAMA. 2018;320:2542–2552. doi: 10.1001/jama.2018.19232

8. Newhouse JP, Normand S-LT. Health policy trials. N Engl J Med. 2017;376:2160–2167. doi: 10.1056/NEJMra1602774

9. Wadhera RK, Bhatt DL. Toward precision policy: the case of cardiovascular care. N Engl J Med. 2018;379:2193–2195. doi: 10.1056/NEJMp1806260

10. The YODA Project. Welcome to the YODA Project. http://yoda.yale.edu/welcome-yoda-project. Accessed February 11, 2019.

11. Cabral M, Geruso M, Mahoney N. Do larger health insurance subsidies benefit patients or producers? Evidence from Medicare Advantage. Am Econ Rev. 2018;108:2048–2087.

12. Wasfy JH, Zigler CM, Choirat C, Wang Y, Dominici F, Yeh RW. Readmission rates after passage of the hospital readmissions reduction program: a pre-post analysis. Ann Intern Med. 2017;166:324–331. doi: 10.7326/M16-0185

13. Localio AR, Goodman SN, Meibohm A, Cornell JE, Stack CB, Ross EA, Mulrow CD. Statistical code to support the scientific story. Ann Intern Med. 2018;168:828–829. doi: 10.7326/M17-3431

14. Beaulieu-Jones BK, Wu ZS, Williams C, Lee R, Bhavnani SP, Byrd JB, Greene CS. Privacy-preserving generative deep neural networks support clinical data sharing. Circ Cardiovasc Qual Outcomes. 2019;12:e005122. doi: 10.1161/CIRCOUTCOMES.118.005122

15. American Economic Association. Data availability policy. https://www.aeaweb.org/journals/policies/data-availability-policy. Accessed February 11, 2019.

16. Krugman P. The Excel depression. New York Times. April 19, 2013. https://www.nytimes.com/2013/04/19/opinion/krugman-the-excel-depression.html. Accessed February 11, 2019.
17. Bailey DH, Borwein J. The Reinhart-Rogoff error – or how not to Excel at economics. The Conversation. http://theconversation.com/the-reinhart-rogoff-error-or-how-not-to-excel-at-economics-13646. Accessed February 11, 2019.

18. Kishore N, Marqués D, Mahmud A, Kiang MV, Rodriguez I, Fuller A, Ebner P, Sorensen C, Racy F, Lemery J, Maas L, Leaning J, Irizarry RA, Balsari S, Buckee CO. Mortality in Puerto Rico after Hurricane Maria. N Engl J Med. 2018;379:162–170. doi: 10.1056/NEJMsa1803972

Keywords: ethics; publishing/ethics; humans; reproducibility; biomedical research

© 2019 American Heart Association, Inc.