The Ethics of Scientific Publishing: Black, White, and “Fifty Shades of Gray”
Int J Radiat Oncol Biol Phys, 2017, Volume 99, Issue 2 (Elsevier BV). Language: English.
DOI: 10.1016/j.ijrobp.2017.06.009. ISSN: 1879-355X.
Scientific discovery has been reported in print journals since 1667, and the entire associated process of experimentation, manuscript writing, peer review, publication, and discussion has withstood the test of the centuries. One might, thus, consider it highly evolved, effective, and resilient. With more than 25,000 diverse medical journals currently in existence, some catering to very niche areas, few can doubt that it is highly evolved (1). Its efficacy, however, is in serious question, and with that, its resilience to survive into the future.

Confidence among scientists in scientific reporting is now extremely low, with the lowest levels found among medical researchers. Begley and Ellis, in a survey of more than 1500 investigators, found that whereas those working in physics and engineering had reasonable confidence in the work published in their fields, the vast majority of those in medicine believed that more than half of published results are simply not reproducible (2). Irreproducibility may have many causes. Certainly the authors must bear responsibility, through a failure of scientific rigor, honest error, or willful misbehavior; but the responsibility is shared with those who publish the work, whether through a failure of the review process or an over-eagerness of editors to publish positive results.

If published research is found to be flawed, editors currently turn to errata for small corrections and to retractions for work that is more egregiously flawed or misleading. The retraction rate has increased dramatically over the last decade and is growing at a rate that exceeds the growth in the number of manuscripts published over the same period (3). A “retraction index” (RI) for journals has been described (4). The RI is derived by taking the number of papers retracted over the last 10 years, multiplying by 1000, and then dividing by the number of papers published in that journal over the same interval. The highest RIs are seen in high-profile journals such as the New England Journal of Medicine, Lancet, Science, and Cell: these journals publish high-impact papers, are under close scrutiny, and have a relatively low denominator.
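Stated as a formula (the subscripted symbols are simply our shorthand for the quantities defined above):

\[ \mathrm{RI} = \frac{1000 \times R_{10}}{P_{10}} \]

where \(R_{10}\) is the number of papers the journal retracted over the last 10 years and \(P_{10}\) is the number it published over the same interval. A journal that published 20,000 papers in a decade and retracted 10 of them would thus have an RI of 1000 × 10 / 20,000 = 0.5.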
Although the number of retractions is increasing fast, it is unclear whether this represents an increase in the problem or merely increased awareness; it is likely both. Journals, then, are publishing a great deal of research of dubious worth, probably vastly more than the retraction rates indicate. But does the problem lie entirely with the investigators, or do the editors, publishers, and the current system of academic advancement also bear culpability? At its mildest, simply designing a poor study, with a methodology that cannot hope to address the hypothesis, and then “fishing” with subgroup analyses and shifting cut-points for a positive P value, is an ethical gray zone, because investigators should simply know better.

Such behaviors may or may not be intentional, and the review process, when applied properly, helps prevent inferior work of this kind from finding its way into reputable journals. A study by Fang et al, however, shows that the majority of retractions are the result of scientific misbehavior and not honest error (5). Misbehaviors span a spectrum, and I have written previously on the “unholy trinity” of falsification, fabrication, and plagiarism (6). At the extreme end of the spectrum lies the complete and intentional fabrication of data to generate publishable results. When discovered, these may become high-publicity, even criminal, cases. Recent examples of such fraud include human cloning and the reporting of trials that never took place at all (7, 8).

Behind the extremity of fabrication stands its little brother, falsification. Here, data are manipulated to “improve” the result. It takes protean forms, but common examples include inflating the numbers in experimental groups to boost the significance of the data and the manipulation of digital images. The latter is increasingly seen in molecular biology, where blots can be cut, replicated, or reused to force or simulate a desired outcome.

Plagiarism is the use of the words of others without attribution. Although probably not the most common form of misbehavior, it is the most easily detected and the one we uncover most frequently at the International Journal of Radiation Oncology • Biology • Physics (the Red Journal). How much reproduction of text it takes to cross the line into plagiarism is not fixed; it requires reading both the work of the plagiarist and the original source, and then considering context. At the Red Journal we use antiplagiarism software to compare all manuscripts received with the published literature. When similarities are found, they are highlighted and the editors alerted. Almost all papers overlap with the published literature by less than 15%, usually the aggregate of common phrases or materials and methods. Egregious plagiarism usually falls in the 50% to 75% word-match range. Such papers are now usually detected before review, and the likelihood that we will see them in print in the future has declined sharply.
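Commercial screening tools are proprietary, but the idea behind a word-match percentage such as the 15% or 50% to 75% figures above can be illustrated with a minimal sketch. Everything here, including the shingle size, the sample texts, and the function names, is hypothetical and chosen only to show the principle.

```python
# Minimal illustration of a word-overlap score between a manuscript and
# one prior document. Real screening tools are far more sophisticated;
# the shingle size and all names here are hypothetical.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of n-word shingles (word n-grams) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_percent(manuscript: str, source: str, n: int = 5) -> float:
    """Percentage of the manuscript's shingles that also appear in the source."""
    m, s = shingles(manuscript, n), shingles(source, n)
    return 100.0 * len(m & s) / len(m) if m else 0.0

if __name__ == "__main__":
    new_paper = "irradiated cells were counted after 48 hours of incubation in vitro"
    old_paper = "cells were counted after 48 hours of incubation in standard medium"
    print(f"word-match: {overlap_percent(new_paper, old_paper):.0f}%")  # ~71%
```

Overlap against a single source is then aggregated across the whole indexed literature, which is why routine methods phrasing alone typically accounts for the harmless sub-15% baseline.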
The issue of authorship is a troubling one for authors and editors alike. An author is “one who originates or creates,” and authorship is clearly defined by the International Committee of Medical Journal Editors (9). Those who do not fulfill these unambiguous criteria should be mentioned only in the acknowledgments. Author lines are growing longer, a fact that may reflect the multidisciplinary nature and complexity of contemporary research, but that also likely reflects a culture in which the first author feels the need to pay back, or flatter, colleagues and to pay respect to the head of his or her division or department. Flanagin et al (10) reported that somewhere between 11% and 29% of those on the author line were undeserving. This is a difficult issue for editors to police, and although it is perhaps not a serious offense, it sits at the fuzzy edge of good ethical behavior. If it goes unpoliced within a department or laboratory, it may act as a “gateway” to more troubling behaviors in the future.

A new phenomenon, bogus peer review, has arisen in recent years. To make their own lives easier, editors have for some years offered authors the opportunity to suggest peer reviewers for their papers, asking for those reviewers' e-mail addresses. Some unscrupulous authors have suggested real names but supplied e-mail addresses they had created for the purpose (11). When reviews are solicited, the authors then provide them themselves, in glowing and supportive terms. More than 300 papers have been retracted for rigged peer review since 2012.

Post-submission misconduct, and misconduct around authorship, have been felt by some to be less grave than falsification and fabrication because the science remains “unpolluted” by bogus results. Biagioli (12) has argued, however, that these “lesser” misbehaviors must be repeated to achieve their goal of academic advancement: “Many academic fraudsters aren't aiming for a string of high-profile publications. That's too risky. They want to produce—by plagiarism and rigging the peer-review system—publications that are near invisible, but can give them the kind of curriculum vitae (CV) that matches the performance metrics used by their academic institutions. They aim high, but not too high.” Put another way, small misbehaviors, by their extent and number, can widely undermine the academic culture.

We should not be under any illusion that misbehavior is entirely the preserve of authors. Editors have a raft of self-serving behaviors of their own. At the very least, they are responsible for the publication of large numbers of irreproducible papers based on poor methodology and “P value fishing”; developing an effective peer-review process to weed out such papers is the editor's responsibility. Editors, however, are under pressures of their own. There is a strong bias toward the publication of positive results because they are the most eye-catching, and there is a perceived need to boost the impact factor of the journal. This may happen through the acceptance of weakly reviewed “positive” or controversial papers. It may also happen through a quiet policy of journal self-citation.
In this practice the journal leans on authors to cite papers published within its own pages during the impact factor “window” of the 2 previous years. The practice is self-promoting and distorts the validity of the metric (13). If conducted flagrantly, it can lead to a journal's impact factor being suspended, but editors are usually too artful to carry the practice that far. Again, as with the concerns regarding authorship expressed above, it is a gray behavior that, if ignored, begins slowly to erode the ethical foundation of the scientific publication system.

Editors, like authors, have their own forms of extreme misbehavior. “Pay to cite,” or “citation reward,” programs are now being uncovered, as are “citation cartels.” In the latter, the editors of 2 or more journals quietly agree on a policy of ensuring that one another's journals are frequently cited (14). These schemes are now being uncovered using big-data tracking, through which inter-journal citation relationships can be described numerically. Anomalous changes in citation activity trigger alerts, just as unusual activity on a credit card does, and prosecutions have followed.
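A minimal sketch of the kind of screen described above. The journal names, counts, and threshold are invented for illustration; the real analyses behind citation-cartel detection use far richer statistical models.

```python
# Toy screen for mutually inflated citation between journal pairs.
# All journal names, counts, and the threshold are hypothetical.

CITATIONS = {  # CITATIONS[a][b] = citations from journal a to journal b
    "J_A": {"J_B": 310, "J_C": 12},
    "J_B": {"J_A": 290, "J_C": 15},
    "J_C": {"J_A": 10, "J_B": 9},
}
TOTALS = {"J_A": 2400, "J_B": 2100, "J_C": 1900}  # total outgoing citations

def mutual_rate(a: str, b: str) -> float:
    """Average share of each journal's outgoing citations aimed at the other."""
    return (CITATIONS[a].get(b, 0) / TOTALS[a]
            + CITATIONS[b].get(a, 0) / TOTALS[b]) / 2

THRESHOLD = 0.05  # flag pairs exchanging >5% of their citations on average
journals = sorted(TOTALS)
for i, a in enumerate(journals):
    for b in journals[i + 1:]:
        rate = mutual_rate(a, b)
        if rate > THRESHOLD:
            print(f"ALERT: {a} <-> {b} mutual citation rate {rate:.1%}")
```

Run on these toy numbers, only the J_A and J_B pair is flagged, at a mutual rate of roughly 13%; a spike in such a rate from one year to the next is what triggers the credit-card-style alert.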
The publication system has for nearly 4 centuries relied on big publishing houses to provide structure, aggregate manuscripts, facilitate review, and regularly publish what we have come to recognize as a print journal. In return for this service, the journal and its papers belonged to the publishing houses, which recouped their costs, and ultimately profited, through the sale of subscriptions to individuals and libraries and through the sale of reprints. The information within the journals was readily available to those with access to medical libraries holding the relevant titles, but not to all. In the electronic era it is often said that “information wants to be free,” and it is certainly true that governmental agencies, charities, and patients would like immediate access to the information their tax dollars or benefactions have funded. This led, in 2002, to the Budapest Open Access Initiative declaration (15). Funding agencies agreed that, moving forward, they would encourage the publication of their sponsored research in journals offering “open access.” Open access thus began as a noble concept to liberate data from behind publisher paywalls, and research funders were prepared to underwrite the costs of publication within their grant awards. The publishing houses initially perceived this as a threat to their subscription-based business model and pushed back against it. This left a vacuum, soon recognized as a huge business opportunity, to be filled by smaller, often start-up, publishing houses. These companies combined the infinite “page space” of e-publishing with open access.

Because there are no subscribers, someone has to pay, and so the costs were shifted onto authors, who were charged an article processing charge, or APC (up to $4000, with an average of $1200), for each manuscript accepted. The more manuscripts accepted, the more fees. Authors who might previously never have contemplated paying for publication were now willing to do so, either because of the demands of their funders (the minority) or because they work in “publish-or-perish” academic environments and were struggling to have their work accepted by conventional, selective, peer-reviewed journals.

Some open access, online journals, such as PLoS One, have gone from strength to strength and built prestige by evaluating papers on the basis of methodology rather than results and by printing large numbers of studies, both positive and negative. This has been a real service to science, and their APCs go toward real editorial services and formal review. Others, however, offer light review, or none at all, and seem focused more on pocketing the APC while providing minimal editorial services. They have publishing offices that do not exist at the addresses given, editorial boards listing individuals who gave no consent, and names sufficiently close to those of reputable journals that confusion easily results. They have also been known to lure in submissions with counterfeit impact factors. Investigators, in sting operations, have submitted nonsense articles that were immediately accepted, exposing many of these journals as having no peer review whatsoever (16).

This phenomenon has been called “predatory open access,” and the University of Colorado librarian Jeffrey Beall meticulously documented its growth, although his website recently “went dark,” likely owing to threats and legal challenges (17). As of 2017 he had identified well over 1000 publishers fitting these criteria. They have become so “noisy” that they are crowding out the reputable open access journals, which have been growing in number as traditional publishers enter the space but are difficult to distinguish from the others. The ethics of the open access concept are of the highest level; the ethics of the predatory publishers are the lowest.

It is as yet unclear how this situation will resolve. It is probable that, as traditional publishers offer open e-access for articles published within traditional journals (the “hybrid model”), the demand for new open access journals will decline. As traditional journals and medical societies create their own reputable open access sister journals, such as Advances in Radiation Oncology, that demand will decline further. At that point the business model will cease to be profitable for the predatory publishers, and they will move on to pastures new. Interestingly, the hybrid model presents an ethical challenge of its own: publishers, now collecting publication fees from authors, are still collecting their subscription fees and thus, it could be argued, “double dipping.” This has yet to be resolved.
The issue of ethical integrity and the “reproducibility crisis” are not exactly the same thing, but they are sibling problems joined at the hip. Poorly reproducible data do not always result from malfeasance, but they do undermine the credibility and reputation of science. Editors have an ethical responsibility to do everything in their power to minimize the problem, whether it results from honest error or from a darker source. At present we have a reactive system. Antiplagiarism software is in place at most journals. The peer-review system, flawed as it is, acts as a filter, with poor or suspicious papers being detected, investigated, and, if necessary, reported. The Committee on Publication Ethics provides excellent guidelines for editors. Editors and reviewers must also ensure that studies adhere to their design and do not shift their recruitment numbers or endpoints to ensure a positive outcome. Prospective trials are now strongly encouraged to register at the outset, and only the endpoints for which a trial was designed are reportable at its conclusion. Some institutions employ blockchain technology to “lock” clinical trial objectives and thus prevent the retrospective rewriting of objectives to better fit the data. The Red Journal has joined the other major medical journals in publishing only publicly registered prospective trials (18).
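The essence of that “locking” idea is a tamper-evident fingerprint: hash the registered objectives once, record the digest publicly, and any later edit becomes detectable. The following is a minimal sketch, assuming nothing about any particular institution's system; the trial text and function names are invented for illustration.

```python
# Tamper-evident "locking" of trial endpoints via a cryptographic hash.
# A blockchain adds a distributed, timestamped ledger for the digest;
# the detection step is the same. All trial details here are invented.

import hashlib

def lock(objectives: str) -> str:
    """Return a SHA-256 digest of the registered objectives."""
    return hashlib.sha256(objectives.encode("utf-8")).hexdigest()

registered = "Primary endpoint: 5-year overall survival. Accrual target: 400."
digest_at_registration = lock(registered)  # recorded publicly at trial launch

# Years later, at manuscript submission, the stated objectives are re-hashed:
submitted = "Primary endpoint: 2-year biochemical control. Accrual target: 400."
if lock(submitted) != digest_at_registration:
    print("Objectives differ from those registered; editor alerted.")
```

Because even a one-character change produces a completely different digest, the original objectives cannot be quietly rewritten to fit the data after the fact.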
The peer-review system can function only if it is conducted without prejudice. Editors have a responsibility to ensure that reviewers do not act out of bias and self-interest, and many journals, including the Red Journal, now offer double-blind review to protect authors (19). The Red Journal has also followed the journal Cortex in experimenting with pre-review. In this pilot model, studies are reviewed for methodology and statistical design before they are performed (20). If they pass muster, they will be accepted regardless of their results, so long as the approved methodology was adhered to. This is a novel approach to reducing publication bias. Another initiative taken by some journals, such as the European Journal of Neuroscience, to reduce reviewer bias is to unblind the review process altogether and publish the reviews together with the article. The idea is that, without a cloak of anonymity, the power balance between author and reviewer changes, and reviewers take more care and responsibility. They also receive academic credit for the work they have done.

Once papers are published, there are currently 3 levels of postpublication evaluation. First, there are the readers and practitioners who discuss a paper publicly, write letters and editorials, or comment on Twitter. This community, in large part, determines the influence of published articles on practice. Second, there is a community of scientists who read manuscripts very critically, identify concerning inconsistencies, and, under a cloak of anonymity, call out suspicious papers. PubPeer has become a remarkable venue for the expression of such concerns, and many subsequent retractions, including one at the Red Journal, have had their origin there (21). Finally, Retraction Watch is a website that gathers the information behind all the retractions in the scientific literature (22). It has revealed patterns of misbehavior, identified repeat offenders, and cast a spotlight on a troubling aspect of science previously managed quietly, if at all, within journals and universities. PubPeer and Retraction Watch remind investigators that a lack of integrity can have very visible consequences.

Maintaining the integrity of science has to begin “at home,” through good teaching and example. Junior investigators will follow those they admire. A research mentor, head of a laboratory, or departmental chair who insists on repeat experimentation, who insists on seeking the advice of statisticians, who insists on publishing one meaningful paper rather than dicing the work into small papers for the sake of a CV, and who declines “honorary” authorship sets a fine example and creates good habits. Training may be necessary: a younger generation, used to looking up data online and to cutting and pasting, may simply not know what plagiarism is or where the line is drawn. The Israeli author Amos Oz wrote ironically, “If you copy from one man's book you are a plagiarist. If you copy from 10 men's books, you are a scholar. And if you copy from 30 men's books, you are a great scholar” (23).

When problems with published data are identified, a good mentor will insist on initiating an erratum, or even a retraction, and a good editor will be supportive and make the process as painless as it can be. When, in 2008, I realized that some work I had published in the Journal of the American Medical Association in 2005 contained statistical errors requiring correction, I called the editor at the time, Catherine DeAngelis. She responded to my anxious explanation not with exasperation or rage but with the kind words, “I want to pin a medal on your chest”! She then helped me to complete an erratum and rapidly correct the scientific record (24). Because of her support I now regard that erratum with as much pride as the original article. Correction needs to be freed from shame and to come naturally, rather in the way that blame-free safety reporting has been established in radiation oncology departments over recent years.

While being supportive of those who correct honest errors, we must be equally intolerant of real misbehavior. Chairs, institutions, and their offices of research integrity will have to take a hard line to set examples. Federal funding agencies are now prosecuting and insisting on the return of misspent dollars; the same can be true for misspent departmental or institutional funds. Jobs and academic advancement must be on the line.
The academic culture that has led to this frenzy of publication must accept responsibility and must adapt. The impact factor is a measure of a journal's impact, not the impact of a single paper. It is sincerely to be hoped that promotion committees will learn to evaluate more critically the contribution of a candidate's body of work and individual papers, rather than the number of publications or the impact factors of the journals in which they appeared. New metrics, called altmetrics, have been developed that log the number of downloads of a particular article and the amount of discussion it engenders on social and conventional media. These, and other novel measures of the true worth of a piece of published research, need to be brought into the candidate's assessment. One highly discussed, practice-changing paper should carry far more weight than a multitude of “me too” publications in lightweight journals.
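By way of illustration only: an article-level score of the kind just described is typically a weighted combination of usage and discussion signals. The weights, signals, and counts below are entirely hypothetical; commercial altmetrics providers use their own proprietary formulas.

```python
# Hypothetical article-level attention score: a weighted sum of usage
# and discussion signals. Weights and counts are invented for illustration.

WEIGHTS = {"downloads": 0.01, "tweets": 0.5, "news_stories": 8.0, "letters": 5.0}

def attention_score(signals: dict[str, int]) -> float:
    """Combine per-article signals into a single comparable number."""
    return sum(WEIGHTS[k] * v for k, v in signals.items() if k in WEIGHTS)

practice_changing = {"downloads": 12000, "tweets": 300, "news_stories": 6, "letters": 4}
me_too = {"downloads": 150, "tweets": 2, "news_stories": 0, "letters": 0}

print(attention_score(practice_changing))  # 120 + 150 + 48 + 20 = 338.0
print(attention_score(me_too))             # 1.5 + 1.0        = 2.5
```

Whatever the exact weights, the point of such a score is the contrast it makes visible: one widely discussed, practice-changing paper outscores a stack of unread ones.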
The open access “bubble” is unlikely to burst, because it grew from a solid ethical core, but we must hope that the predatory open access model will. These myriad confusing journals make publication, and CV boosting, so easy that they devalue the coin. New open access journals growing out of major journals, and sharing their solid review processes, are becoming established and will likely restore cosmos within the chaos. Perhaps the best way to reduce the integrity issues increasingly associated with scientific journals, however, is simply to move away from journals altogether (25). In the basic sciences many investigators report their work in data repositories, on archive sites, or on their own websites, all publicly available and easily searchable. At Push and ASAPbio, investigators can report “as they go,” accepting advice, discussion, and criticism, and choosing subsequent experimentation or rewriting in response to it (26, 27). The discussion is thus integrated into the paper, and every piece of work becomes a running scientific debate incorporating the reaction of the community. Archive sites such as arxiv.org and biorxiv.org gather preprints of papers without review (28, 29). Although this concept antedated open access and aimed to bring science into a public space before it was locked behind a publisher's paywall, it is now, with National Institutes of Health endorsement, becoming an end in itself. No libraries, no subscriptions, no open access fees. This is the post-journal world toward which we appear to be heading. If universities and their promotions committees recognize the validity of such an approach to science and give it their endorsement, the current outdated and usurped publication system will begin to crumble. Once a major oncology trials group, under an open access mandate, chooses to publish not in the New England Journal of Medicine or Lancet Oncology but on its own website or in, say, a National Cancer Institute repository, that will be the boost oncology needs to make this jump into the future.

The “clean” system of the future may well not involve scientific publications in their current form at all. Journals are unlikely to die when scientific data start being routinely posted on publicly accessible websites; they will simply take on a new function, which might be called “journalism.” Medical practitioners scarcely read most of the scientific articles within current journals; what these readers need is easily digestible, practice-relevant review material. Current journals, including the Red Journal, will likely transition away from carrying original scientific articles and become places for debate. They will contain the scientific highlights, the digests, and the systematic reviews that practitioners need in their daily lives. To paraphrase the monarchical cry of succession, “The journal is dead, long live the journal!”
References

1. Fraser A, Dunstan F. On the impossibility of being expert. BMJ 2010;341:c6815.
2. Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature 2012;483:531-533.
3. Cokol M, Ozbay F, Rodriguez-Esteban R. Retraction rates are on the rise. EMBO Rep 2008;9:2.
4. Fang FC, Casadevall A. Retracted science and the retraction index. Infect Immun 2011;79:3855-3859.
5. Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci U S A 2012;109:17028-17033.
6. Zietman AL. Falsification, fabrication, and plagiarism: The unholy trinity of scientific writing. Int J Radiat Oncol Biol Phys 2013;87:225-227.
7. Gottweis H, Triendl R. South Korean policy failure and the Hwang debacle. Nat Biotechnol 2006;24:141-143.
8. Maugh T, Mestel R. Key breast cancer study was a fraud. Available at: http://articles.latimes.com/2001/apr/27/news/mn-56336. Accessed December 30, 2016.
9. International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Available at: www.icmje.org/icmje-recommendations.pdf. Accessed December 30, 2016.
10. Flanagin A, Carey LA, Fontanarosa PB, et al. Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical journals. JAMA 1998;280:222-224.
11. Haug CJ. Peer-review fraud - Hacking the scientific publication process. N Engl J Med 2015;373:2393-2395.
12. Biagioli M. Watch out for cheats in citation game. Nature 2016;535:201.
13. Van Noorden R. New record: 66 journals banned for boosting impact factor with self-citations. Available at: http://blogs.nature.com/news/2013/06/new-record-66-journals-banned-for-boosting-impact-factor-with-self-citations.html. Accessed December 30, 2016.
14. Oransky I, Marcus A. Gaming the system, scientific cartels band together to cite each others' work. Available at: https://www.statnews.com/2017/01/13/citation-cartels-science. Accessed December 30, 2016.
15. Budapest Open Access Initiative. Read the Budapest Open Access Initiative. Available at: http://www.budapestopenaccessinitiative.org/read. Accessed December 30, 2016.
16. Bohannon J. Who's afraid of peer review? Science 2013;342:60-65.
17. Chawla DS. Mystery as controversial list of predatory publishers disappears. Available at: www.sciencemag.org/news/2017/01/mystery-controversial-list-predatory-publishers-disappears. Accessed December 30, 2016.
18. Palma D, Zietman A. Clinical trial registration: A mandatory requirement for publication in the Red Journal. Int J Radiat Oncol Biol Phys 2015;91:685-686.
19. Jagsi R, Bennett K, Griffith K, et al. Attitudes toward blinding of peer-review and perceptions of efficacy within a small biomedical specialty. Int J Radiat Oncol Biol Phys 2014;89:940-946.
20. Mell L, Zietman A. Introducing prospective manuscript review to address publication bias. Int J Radiat Oncol Biol Phys 2014;90:729-732.
21. PubPeer. Home page. Available at: https://pubpeer.com. Accessed December 30, 2016.
22. Retraction Watch. Home page. Available at: http://retractionwatch.com. Accessed December 30, 2016.
23. Oz A. A Tale of Love and Darkness. New York: Mariner Books; 2005.
24. Zietman AL. Correction: Inaccurate analysis and results in a study of radiation therapy in adenocarcinoma of the prostate. JAMA 2008;299:898-899.
25. Priem J. Beyond the paper. Nature 2013;495:437-440.
26. Push. Home page. Available at: http://push.cwcon.org. Accessed December 30, 2016.
27. ASAPbio. Home page. Available at: asapbio.org. Accessed December 30, 2016.
28. arXiv. Home page. Available at: arxiv.org. Accessed December 30, 2016.
29. Cold Spring Harbor Laboratory. bioRxiv home page. Available at: biorxiv.org. Accessed December 30, 2016.