Editorial, peer reviewed

Biomedical research: a house of cards?

2015; Future Science Ltd; Volume: 8; Issue: 1; Language: English

10.4155/fmc.15.171

ISSN

1756-8927

Authors

Gerald H. Lushington, Rathnam Chaguturu

Topic(s)

Biomedical Ethics and Regulation

Abstract

Future Medicinal Chemistry, Vol. 8, No. 1, Editorial

Biomedical research: a house of cards?

Gerald H Lushington, LiS Consulting, 2933 Lankford Dr, Lawrence, KS 66046, USA
Rathnam Chaguturu (author for correspondence; e-mail: mchaguturu@gmail.com), iDDPartners, 3 Edith Court, Princeton Junction, NJ 08550, USA

Published online: 21 December 2015; https://doi.org/10.4155/fmc.15.171

Keywords: biomedical research; deliberate misconduct; irreproducibility crisis; misplaced priorities

First draft submitted: 12 October 2015; Accepted for publication: 16 October 2015

Twenty-eight billion dollars is the estimated sum of money the USA alone spends each year on preclinical bioscience research that eventually turns out to be irreproducible [1]. Simply put, a single scientific subdiscipline within a single country wastes more money through flawed research than the entire annual gross domestic products of nearly half of the world's nations [2]. Given the daunting health and biomedical challenges faced by humanity, can we truly afford such profligacy?

The cause of this problem and potential remedies remain elusive [3,4]. A measured analysis of the biomedical reproducibility crisis indicates that the problems are far greater than can be accounted for either by intentional fraud or willful negligence [5]. Shocking research misconduct may dominate scientific headlines, but the most prevalent contributors to the irreproducibility crisis include poor validation of materials, instruments and protocols, and mis- or over-interpretation of data [1,5–7].

If deliberate research falsification is rare, and most scientists are well intentioned, how can it be that more than half of all published bioscientific research reports are flawed [1]? Depending on how one frames the problem and its metrics, it has been tempting to lay the blame primarily on irresponsible scientists [8], but should not the sheer scope and prevalence of the problem lead us to question whether the institution of science itself is fundamentally broken? Is this a failure, not so much by scientists, as of the priorities imposed on them?

The practice of science is motivated by priorities. If a research study produces breakthroughs deemed important enough to catch your eye, that means it has been promoted above numerous other published or unpublished studies. To illustrate this, consider the life cycle of a research project. A project is conceived when scientists analyze the existing scientific knowledge base (the journals, mass media, databases, etc.) and identify a conceptual gap that they have the intuition, skills and strategies to address. The hypothesis-driven project evolves into practical investigation if it is prioritized by people or agencies willing to support it with funds and resources. The project matures and grows if it produces enough new scientific understanding, as judged by peer reviewers, to warrant enshrining back into the scientific knowledge base from which it was inspired.

Ideally, this forms a positive feedback cycle for growing a compendium of knowledge that supports increasingly reliable and sophisticated new studies.
To a large extent, science has always been self-correcting and self-policing, and most researchers are sincerely honest; however, the cycle has begun breaking down under the accumulation of dubious or false information within the core knowledge base, thus compromising the informational foundation of countless future studies. Somehow, a prioritization engine that designed computers, put men on the moon and solved genomic codes is now showing alarming signs of dysfunction.

So how did our pool of critical knowledge get so contaminated? Whether through fraud, negligence or other more ignorant/innocent flaws, the problems ultimately arise from a misguided system of incentives. Proposal funding and manuscript publication rely upon peer review according to a series of key questions: What is the prospective impact of the proposal/report? How novel is the study? What is the reputation of the authors? How feasible/believable is the study?

Although seemingly appropriate, these adjudication metrics are clearly no longer adequate for safeguarding the integrity of our scientific knowledge base. In recent years, peer-evaluated scientific research has been scrutinized relative to the all-important arena of real-world outcomes, and has been found to harbor nearly endemic flaws. The precise quantification of irreproducibility rates remains a challenge, ungoverned by universal evaluation metrics. However, a variety of recent surveys suggest that between 50 and 90% of key findings in basic science studies have questionable reproducibility [1,9], and similar statistics apply to clinical observational assessments [10].

Even discounting the economic cost, there are serious societal implications to this failure rate. For example, in clinical settings, irreproducibility could mean the difference between a drug that is safe and effective and one with questionable efficacy or safety. In the preclinical arena, the long-term consequences may actually be even worse: a tainted scientific study that is exposed may reduce public confidence (and potentially also public funding) in science, but one that remains undetected may have even more deleterious effects by degrading the knowledge base upon which countless future studies may unwittingly attempt to build. Until systemic trust can be re-established, we are bound to miss many tantalizing opportunities to advance the human condition because our jaded society simply does not know what to believe. Distinguishing sense from nonsense thus becomes an exercise in futility.

The recent surge of public commentary decrying methodological failures in funded, published science has prompted a spate of recommendations aimed at slowing and correcting this trend. A sample of commonsense recommendations includes:

- Ensure global accessibility to detailed protocol information, raw data, metadata and numerical manipulations [4]
- Increase the amount of funding available for reproducibility studies [11]
- Implement more exacting standards for statistical evaluation [10]
- Update and disseminate standard best practices for the design and validation of experiments [12] [Vaux D, pers. comm.]
- Digitally scrutinize gel blots, microscopic images, etc., for tampering and misuse [4]
- Evaluate and curate technical protocols independently of manuscripts and proposals [4]
- Provide effective safeguards to protect those who report suspected fraud [4,5]
- Make principal investigators more accountable for what they publish [4]
- Empower law enforcement to investigate suspected fraud and prosecute fittingly [4]

While such corrective measures make sense, applying each measure individually is like pasting many small bandages over a gaping wound. Why not directly integrate a more rigorous mindset into the mechanisms for funding and publishing research results? To illustrate this, let us scrutinize some key examples of the kinds of considerations that often play substantial roles in manuscript or proposal review, with a mind to finding opportunities to refine the peer-review mentality in ways that better foster reproducibility.

Scientific impact

Funding panels assess proposals for relevance to key topics and applications identified by the agency, but most final funding decisions hinge on the question of how interesting and important the project sounds, and what chance there is of producing big breakthroughs. Similarly, manuscripts with sensational prospective implications tend to rise above those that are merely relevant, realistic and interesting.

The impact measure is familiar to most researchers and probably sounds superficially reasonable, but it is not optimally aligned with the fundamental scientific objective of advancing our core knowledge. Proposal and publication approval slants excessively toward studies that confirm original hypotheses, overlooking the fact that honest reporting of negative research results is invaluable in guiding future generations of scientists away from unproductive lines of investigation and toward alternative paths of study. As well, the focus on well-articulated hypotheses may itself be damaging our scientific prospects. While many NIH (US) program officers once denigrated the 'look and see' type of proposal (e.g., a broader-based screening platform capable of producing relevant insight without preconceived notions of eventual findings), stated hypotheses often become self-fulfilling, although not necessarily in a replicable manner.

Aspects of the above impact concept remain useful, but evaluation should be broadened to assess the chance that a proposed study will shape the landscape of scientific understanding, regardless of whether the study is based on explicit hypotheses, and even if those hypotheses are ultimately disproven. Similarly, in an ideal world, manuscripts reporting negative findings would be commended and advanced to publication as long as they are likely to provide beneficial guidance to subsequent research.

Novelty

Innovation is justifiably fetishized in the arts, but does an obsession with funding and publishing transformative science actually reflect sound strategy? On a very basic level, if publication in the highest-echelon journals is conditional upon reporting unprecedented conclusions, might this not slant the unwitting scientist toward data interpretations that are more unusual, even if more mundane principles could have been inferred? As well, if a proposed new study employs techniques or applications that differ radically from prior studies, does that make it any more likely to produce practical new knowledge than would incremental progress based more directly on prior understanding?
If a manuscript reports a truly transformative discovery or the application of a wholly new technique, how many suitable reviewers will actually be able to produce reliable, insightful peer assessments of the technical fidelity of the work? If a high degree of apparent novelty introduces greater risks in achieving reasonable data interpretation and rigorous validation, then how does this not exacerbate the irreproducibility problem?

Novelty stirs the human imagination and can engender great enthusiasm, but one need look no further than the stock market to understand the perils of unchecked enthusiasm. Science sometimes does advance in quantum leaps that vault past existing building blocks, and we should encourage such prospects, just as we may hold speculative penny stocks in our retirement portfolio. However, if we aspire to live comfortably into our old age, we should look for a healthy balance both in our personal financial portfolio and in our global biomedical science investments.

Investigator reputation

Despite a strong commitment to objectivity, human nature frequently plays a subtle role in favoring parties with whom we have familiarity. If a reviewer has read papers by scientist X, but is unfamiliar with scientist Y, it may be human nature to unconsciously assume that X has produced higher quality research than Y. By this measure, a scientist who averages 20 publications per year is more likely, by virtue of sheer name recognition, to inspire karmic confidence than one who publishes only twice per year. But is it logical to expect that the hyperpublishing scientist will truly dedicate to every publication the intensive self-scrutiny necessary to foster consistently reproducible research? Is the hypopublisher much more careful, or just unproductive?

The intelligent response is: who knows? Degree of exposure, number of publications and funding history are no guarantees of reproducible research. The best metric for projecting future reproducibility is a carefully articulated validation plan. If one truly wants to encode a track-record metric, then our discipline should begin objectively tracking how reproducible a scientist's past work has proven to be!

Feasibility & believability

The metrics of feasibility and believability encoded into current proposal and manuscript reviewing do have strong relationships with reproducibility. Unfortunately, these criteria rarely carry the weight of the first three factors, yet we ignore them at our own peril. If a graduate student across the hall reported that he had just created new pluripotent stem cells by dosing adult mouse cells with an acidic solution, who would believe him? So how could a prestigious journal such as Nature have accepted an article by a prominent research laboratory making precisely that claim [13,14]? Might this arise because too many reviewers value impact, novelty and reputation above believability?

In addition to being afforded more weight, feasibility and believability metrics should be more rigorously specified. Clear and correct reporting of statistics, universally available raw data and rigorous, well-articulated protocols for external validation are all essential.
Such rigorous documentation is a profound demonstration of good faith, and it can salvage value from studies in which errors do crop up by providing the means for others to readily detect or correct problems rather than leaving them to fester in our knowledge base.

The above revisions to evaluation criteria amount to recognition of a problem, and to a preliminary attempt to address it. The well-intentioned reward system for fostering biomedical research has been too slow in adjusting to the psychological, statistical and analytical complexities of 21st century science and is beginning to fail. The aforementioned recommendations for modifying evaluation criteria are intended to spur dialog to re-introduce a healthy balance into the prioritization process. On a more basic level, this balance includes the following exhortations.

Publish meaningful negative results!

The obsession with reporting transformative breakthroughs has been denying us the benefits of the foundational 'process of elimination'. Discourse on disproven hypotheses and the quirky means by which experiments may fail is shunned by journals and incurs a deep stigma in grant progress reports. Ultimately, if the culture shift required to promote the valuable disclosure of negative findings proves too monumental for our current publishing enterprise to foster, perhaps vigorous promotion of alternative dissemination modes, such as centrally supported self-publication portals, may prove adequate.

Solid fundamentals

In our era of hyperspecialized subdisciplines, the cult of scientific transformation is producing research techniques so esoteric that we risk losing any real expectation of rigorous peer review. Junior scientists spend more and more of their training on sophisticated equipment, at the expense of simple, core lab protocols and a fundamental grounding in the principles of experiment design, data interpretation and validation. Is it any wonder that faulty materials and reagents are the single greatest cause of research irreproducibility [1]? Ultimately, greater sophistication in research methodology will be critical to future achievements, but technological advance only serves us as long as those who apply new technology fully grasp not only its strengths, but also its limitations and the intricate ways in which it may fail or produce misleading results.

Show your work

The ever-shrinking research proposal/application length has degraded the ability of reviewers to reliably assess fundamental procedural and numerical detail, so who truly knows whether the proposed methodology is viable? Competition for print space in many high-impact journals is squeezing publications into a capsular format that glosses over critical technical information, thus defying external validation. Many scientific validation studies fail not because of mistakes in the original study, but rather because of the incomplete protocol specifications available to those trying to reproduce the study [1,13]. Journals that are addressing this issue should be lauded, while those that still lag in this rigor should be gently encouraged to enhance the informational content of their publications.

Furthermore, journal editors and program officers would do our community a great informational service by publishing original critiques and author responses for future reference [15,16].

The next time a retraction catches our attention, certainly we can look at the study to see what the authors did wrong.
But we should also ask ourselves: what was it that they were not given adequate encouragement to do right?

To illustrate the precarious situation in which we find ourselves, imagine the collapse of a high-rise housing complex and the associated risk to human life. Who is to blame: the building contractor, faulty materials, poor workmanship, the handyman, the building inspector or imprecise building codes? Applying this scenario to the current biomedical research 'house of cards' should send tremors down the spines of all those involved. The house has not yet completely collapsed, but there is no time like the present to set it right.

Financial & competing interests disclosure

The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties. No writing assistance was utilized in the production of this manuscript.

References

1. Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol. 13(6), e1002165 (2015).
2. Report for selected country groups and subjects. World Economic Outlook. International Monetary Fund. http://probeinternational.org/library/wp-content/uploads/2011/12/Report-for-Selected-Country-Groups-and-Subjects.pdf
3. Lushington GH, Chaguturu R. A systemic malady: the pervasive problem of misconduct in the biomedical sciences. Part I: issues and causes. Drug Discovery World 16, 79–90 (2015).
4. Lushington GH, Chaguturu R. A systemic malady: the pervasive problem of misconduct in the biomedical sciences. Part II: detection and prevention. Drug Discovery World 15, 70–82 (2015).
5. Gunn W. Reproducibility: fraud is not the big problem. Nature 505, 483 (2014).
6. Aschwanden C. Science isn't broken. It's just a hell of a lot harder than we give it credit for. FiveThirtyEight. http://fivethirtyeight.com/features/science-isnt-broken
7. von Bubnoff A. Special report. Biomedical research: are all the results correct? Burroughs Wellcome Fund. www.bwfund.org/newsroom/newsletter-articles/special-report-biomedical-research-are-all-results-correct
8. Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc. Natl Acad. Sci. USA 109(42), 17028–17033 (2012).
9. Pritsker M. Studies show only 10% of published science articles are reproducible. What is happening? http://www.jove.com/blog/2012/05/03/studies-show-only-10-of-published-science-articles-are-reproducible-what-is-happening
10. Young S, Karr A. Deming, data and observational studies. A process out of control and needing fixing. Significance 8, 116–120 (2011).
11. Iorns E. Reproducibility Initiative receives $1.3M grant to validate 50 landmark cancer studies. http://blog.scienceexchange.com/2013/10/reproducibility-initiative-receives-1-3m-grant-to-validate-50-landmark-cancer-studies/
12. Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature 483(7391), 531–533 (2012).
13. von Bubnoff A. www.bwfund.org/newsroom/newsletter-articles/special-report-biomedical-research-are-all-results-correct
14. Obokata H, Wakayama T, Sasai Y et al. Nature 505, 641 & 676 (2014).
15. Chaguturu R. Scientific misconduct (editorial). Comb. Chem. High Throughput Screen. 17(1), 1 (2014).
16. Chaguturu R. Collaborative Innovation in Drug Discovery: Strategies for Public and Private Partnerships. Wiley & Sons, NY, USA (2014).
