The Glacial Pace of Scientific Publishing: Why It Hurts Everyone and What We Can Do To Fix It
2012; Wiley; Volume 26, Issue 9; Language: English
DOI: 10.1096/fj.12-0901ufm
ISSN: 1530-6860
[Figure: Child's Glacier, Cordova, Alaska. Photo courtesy of Mark Hoover.]

Why is it that in these days of instant information dissemination via blogs, Twitter, Facebook, and other social media sites, our scientific publishing system has ground to a medieval, depressing, counterproductive near-halt?

An amusing anecdote relayed by Gerry M. Rubin, Howard Hughes Medical Institute vice president and executive director of Janelia Farm Research Campus, highlights the dramatic increase in data acquisition made possible by technology in the past 40 years. It took Rubin 1 year to sequence 156 nucleotides in the early 1970s, something that can now be accomplished in a millisecond—a 10^10 increase in rate. The publication describing those 156 nucleotides was accepted without revision at the Journal of Biological Chemistry and published in <7 months in 1973 (and has been cited over 275 times) (1). The speed with which technology is accelerating the pace of science is truly breathtaking.

Unfortunately, the reverse trend is evident in scientific publishing: we are decelerating rather than accelerating the pace of handling papers. As late as 1999, manuscripts were submitted in multiple hard copies and sent to the journal by FedEx, distributed by the journal to peer reviewers in hard copy, reviews faxed back to the journal, and decisions returned to the author by fax. This sounds incredibly antiquated and slow, but the truth is that publishing was fairly rapid a few decades ago. The now-ubiquitous shift from paper to electronic, web-based submission, review, and emailed decisions was designed to speed up all aspects of the peer-review process. Yet, a nonscientific survey of received/published dates that I carried out recently indicates that manuscript handling has become dramatically slower rather than faster over the past 2 decades. I selected five biological sciences articles and letters in two issues of Nature, published 20 years apart—May 30, 1991, and June 2, 2011.
In the 1991 issue, the five papers surveyed were submitted between January 10, 1991, and March 25, 1991, and accepted between April 12, 1991, and April 23, 1991 (2). This works out to a time frame of 4–14 weeks from submission to acceptance 20 years ago. Papers were typically submitted and published within a few months—certainly in the same calendar year. Compare this with the situation in an issue of Nature from last year. Five selected biological sciences papers were submitted between March and October 2010 and accepted in March and April 2011 (3). Among these, the fastest time span from submission to acceptance was 18 weeks, and the others were 26, 27, 32, and a whopping 54 weeks. It says something that I resorted to an online pregnancy calculator to derive the time span that these papers languished in peer review. When scientific data take longer than a human gestation to be made public, we really do have a crisis in our community!

In fact, the situation with extreme delays in scientific publication is likely to be even worse than it appears from this informal and nonscientific survey. It is common practice at many journals to discard the date of initial submission and reset the submission counter to the final submission prior to a positive decision. Add to this the reality that many manuscripts are subjected to serial submission, rejection, and resubmission at multiple journals. This means that years, not months, can elapse between the initial submission at the first journal and the ultimate publication of the same paper at the final journal that accepts and publishes the work.

For academic scientists, publication is the currency of career advancement. Manuscripts held hostage through multiple rounds of peer review can have major adverse effects on the careers of all scientists, with young investigators being particularly vulnerable.
If a paper is submitted but not in press, fellowship committees and faculty search committees will pass on a candidate—"she or he is applying too early" is the general consensus. In fact, she or he is not applying too early; the project is completed and the manuscript submitted, but the 6- to 12-month waiting period to get a paper accepted is never part of a career plan. For junior faculty, grants are not funded, and tenure is not granted—in part because the peer-review system is painfully slow. Faculty whose labs would be thriving if we had a 1-month average time from submission to publication see their careers endangered.

The chilling effect of the glacial pace of publishing extends further to scientific conferences. Participants are afraid to talk about unpublished work, as they have no way of knowing at the time of abstract submission whether the work will be safely in press by the time of the meeting. This leads to many meetings at which the majority of participants are silent about their accumulated unpublished data. This means that we all travel thousands of miles to get together for 4 days to listen to each other present out-of-date published work. We may as well stay home instead, play with our children, and spare the environment the carbon released into the atmosphere.

Everyone involved on the editorial end of scientific publishing is a scientist—editors, authors, reviewers—and we ostensibly share the common goal of publishing high-quality papers for the community and for the tax-paying public that supports much of the research. A given individual in a given year is highly likely to be both author and reviewer. Many of us are also editors. Therefore, whereas it is tempting to blame the editor for being indecisive, the reviewer for being unreasonable, or the author for overinterpreting and being sloppy, we all bear responsibility for the problem.
The process of peer review has been under intense scrutiny in recent years, with some even questioning whether the system improves manuscripts (4, 5). Dr. Hidde Ploegh of the Whitehead Institute at Massachusetts Institute of Technology recently published an interesting editorial about the problem of reviewer-requested experiments (6). This excellent piece has elicited more than 40 reader comments on the Nature web site to date, most strongly supportive of Ploegh's suggestion to restrain the arbitrary power of anonymous reviewers to force authors to carry out endless additional experiments. Whereas I completely agree with Ploegh that reviewer experiments can be a terrible waste of time and resources, I think the blame for delays in scientific publishing is more appropriately placed on ALL participating parties—authors, reviewers, and editors. When I chatted about this topic recently with Gerry M. Rubin, he agreed, saying, "The issues with scientific publishing are really hurting the field and are disillusioning for people early in their careers. Unlike the funding issues, this is one in the control of scientists to fix." Here, I propose three simple steps that each group of participants can take toward the goal of speeding up the publication of scientific breakthroughs and reducing the despair of current scientific publishing.

Authors need to take an ego-free look at their manuscript and send it to the most appropriate journal. Not every paper is meant to be published in the Big Three journals, but each paper has a natural fit at a particular journal. Stop playing the submission system like a lottery; this clogs the pipeline with papers that are not matched to the journal. Poor decisions in initial submission virtually guarantee that the paper will go through the free-falling, time-wasting process of being reviewed by multiple reviewers at progressively more specialized journals until it is eventually accepted somewhere.
I sometimes handle papers for which The FASEB Journal is the second or third stop on the journey of peer review, and many reviewers I contact will recuse themselves because they saw the paper elsewhere and feel the authors deserve a fresh view from new reviewers. The problem is that in a small field, there is a small pool of appropriate, expert reviewers, and once the logical candidates bow out because of exhaustion, the authors are left with reviewers who may be less than perfectly suitable. All of this can be avoided by aiming to submit to the correct journal the first time around.

Beyond the question of where to submit is the question of whether to submit at all. Authors should ask themselves whether the manuscript they are planning to write is really ready to publish or should be held until more data are available to say something substantial. I have received a few review requests in the last year for manuscripts that arguably ought not to be published at all; for instance, a report of the sequence of a gene from a new species that is highly related to the same gene in many other species. Who cares? This is not a scientific advance that needs to be published unless additional supporting data enlarge the impact of the gene sequence. Other papers I was contacted to review obviously represented preliminary work carried out by summer students over the course of a few weeks. Thin, incomplete, and conclusion-free work should not be published. Such manuscripts waste the time of all involved—authors, editors, and reviewers—and if anything, have a negative effect on the curriculum vitae of the authors if they somehow manage to be published somewhere. Please do not submit sloppy work.
I review a lot of manuscripts, and it is depressing to receive badly written documents without page numbers, with figures slapped together from ugly default Excel plots, and with text missing the statistical or technical details needed to understand what was done. I have had a few cases recently where the authors left embarrassing notes-to-self in the text that were meant to be deleted before submission, such as "fix this part you idiot." Sure, this is hilarious, but are they just too busy to reread the paper one last time before submission? My own attempt to wage war against sloppiness in manuscript submission is to refuse to review a manuscript if there are no page numbers. If I am the editor handling a nonpaginated paper, I return the manuscript to the authors immediately to have them paginate it. I do this because I respect the time my reviewers spend reviewing for me, and I find it nearly impossible to review such a manuscript myself: I am forced to refer to a problem in the "first paragraph of the fifth full page of the Results section, just after the section heading entitled X." Our time as reviewers and editors is too valuable to waste on a technical omission that would have taken the authors exactly 10 seconds to fix prior to submission.

Scientists need to become better graphic designers. Every scientist needs to own a copy of Edward Tufte's brilliant book on data visualization (7). Read it carefully, and implement his graphic-design suggestions. Too often, no apparent thought is put into the graphic design of scientific figures. Panels are thrown together without symmetry, low-resolution micrographs that convey no information are pasted in, different fonts are mixed and matched, and color is abused rather than used to enhance clarity of presentation.
The worst offense of all is authors who copy and paste raw graphs from Excel or Prism or other graphing programs and make no effort to edit them in the final figure. This makes the data hard to see through the clutter of what Tufte calls "chartjunk": the extraneous default "stuff" that graphing programs put on our data—distracting horizontal lines, excessive tick marks, legends with big boxes around them, etc.

Finally, too many of us are simply not good scientific writers and produce excessively wordy, blustery, or unclear manuscripts that are a chore to read. Please find someone with good editorial skills and have him or her fix this. If you are submitting to an English-language journal but are not a native English speaker, please find a colleague who is a native speaker or hire a good editing service to polish the syntax and make the manuscript easier to read. It is a pity if excellent science suffers in peer review merely because the paper is poorly written.

By the time a decision is finally emailed to you, the editors and reviewers have put a huge amount of thought and effort into considering your manuscript. In my experience, if there is some support from the reviewers, you will usually be asked for a revision. If all of the reviewers are against you, you will be given a hard rejection. Some journals are good at spelling this out, as the Journal of Neuroscience does with "Reject—No Resubmission Allowed." Yet some authors still do not get the message and expend huge effort putting together a careful and lengthy rebuttal and contacting the editor to appeal the decision. This is a complete waste of time in 99.9% of cases. Your time is better spent reading the reviews carefully and revising the manuscript for another journal. In extreme cases, despondent authors take things a step further.
Of these authors, I ask: please do not telephone the editor to yell at her or him or offer to have your important friends review the manuscript and send in their opinions to change the editor's mind. Life is too short to waste time on such fruitless pursuits. Accept the decision with dignity, and revise and resubmit elsewhere.

Editors, for their part, can help most by triaging decisively. Declining to review papers that you know will not make it at your journal reduces the time wasted putting papers that are just below the mark into peer review; often these manuscripts are rejected anyway months later. It is best to move them along to the appropriate specialty journal right after submission. Scientific journals are proliferating these days, and there is certainly a good, immediate fit for virtually any manuscript at one of them, except those that contain so little information that they should not be published at all. Current Biology does a brilliant job of being decisive at the presubmission stage. All presubmission inquiries are vetted within 1–2 days based on a one-page cover letter and an abstract, which are read by the editors and by scientists on the editorial board with suitable expertise. It is at this stage that the journal decides whether the paper is a suitable fit; subsequent peer review provides only expert honing of the scientific content, not rejection for lack of fit. No editor can be, or is expected to be, an expert in all fields, so please reach out to an expert for her or his opinion if you are not certain whether something will be a good fit before peer review at your journal or whether it should be accepted after peer review. Nature Neuroscience and PLoS Biology editors do this routinely by sending potentially appropriate manuscripts to experts for informal appraisal prior to triaging them or passing them along for peer review.

Further, editors need to be willing to exercise judgment and intervene when reviewers are not unanimous.
Scientific publishing should not be, and cannot be, contingent on a perfect consensus among all reviewers. A few editors I work with, including Katja Brose at Neuron and Tanguy Chouard at Nature, are really good about working with authors and seeking wider opinions when the reviewers are deadlocked. If a paper seems like a good fit and the editors are willing to support it, seeking additional outside opinions in such cases is usually very helpful; sometimes it is as simple as an informal telephone call to a fourth party who has not been a reviewer to get her or his perspective. It is not unheard of for a negative reviewer to block publication for less than perfectly rational or ethical reasons, and a fourth opinion can clarify these situations.

The single most important act an editor can perform to speed peer review is to avoid multiple rounds of review. Whereas three or more rounds of revision seem, for some reason, to be standard at Nature and Cell, I am not certain this is necessary. In many cases, the editor herself or himself can get a good sense of whether the authors have addressed the original reviewers' concerns in the first revision; if the editor judges the revision to have satisfied all reviewer concerns, there is no need to go back to the reviewers, and the paper should be accepted at that stage. Even at the most efficient journals, each round of rereview adds at least 4 weeks to the process. Because authors need months to carry out the experiments requested by reviewers at each round of revision, there is an increasing danger that the original reviewers drop out if the process drags on to a second, third, or even fourth revision. Reviewers can become unavailable at later stages because of sabbatical leave, teaching obligations, maternity leave, a lab move, retirement, or in extreme cases, death.
This means that new reviewers must be found to replace them, invariably leading to the new reviewers finding altogether new flaws in the manuscript that the authors have not yet addressed. Such situations are to be avoided at all costs. The new journal eLife, which is being launched by three funding organizations—Howard Hughes Medical Institute, Max Planck Society, and Wellcome Trust—promises to do just this; in the words of Editor-in-Chief Randy Schekman: "We are streamlining the review process, eliminating unnecessary requests for revision and cycles of review, and having reviewers and editors consult to provide a consolidated view of their comments in the decision letter" (http://www.elifesciences.org/the-journal/scope). I very much hope that this approach will reduce the time and increase the transparency of peer review and that other journals follow in the steps of eLife.

I think that we as reviewers have lost sight of the important fact that the manuscript under review is not our work. When the paper is ultimately published, we do not sign it, and no public record exists that we served as reviewers. Our role is to advise the authors and to help them avoid publishing something incomplete, wrong, or flawed. Ultimately, the authors are responsible for the content of the paper, and they bear the responsibility of being right. In my own experience, reviewers sometimes fixate on very minor details that have not been addressed by experiments, usually because the details are tangential to the story that we as authors are trying to tell. Authors can find themselves wasting months carrying out the kind of "reviewer experiments" that Hidde Ploegh wrote about (6) just to appease reviewers who seem to feel the story is about this tangential point. This is the price we authors pay to publish, but I rarely feel the delay in getting our story out is worth it.
Let's face it: we are only human, but we must abide by some common-sense rules to avoid conflicts of interest in peer review. All of what I am about to say is completely obvious but worth saying nonetheless. Each of us has one or more scientific colleagues whom we just viscerally and irrationally dislike and whose work we are not able to judge objectively as a consequence of this aversion—someone who stole your girlfriend, who humiliated you at a conference, who scooped you, who questioned the findings in one of your own papers, whatever. If you are asked to review a paper from one of these people, please always decline. It is just not fair to allow personal vendettas to taint the already flawed process of anonymous peer review. Similarly, if you are asked to review a paper that directly competes with your own work, decline before even looking closely at the abstract, and delete the email.

There are terrible rumors out there of people agreeing to review a manuscript closely related to their own work, downloading it, distributing it to the lab, appropriating details and data to advance their own research, and then rejecting the manuscript to buy their own group more time to publish a competing paper. This kind of ugly behavior, if it actually does occur, makes me sad for my profession. It should not be happening. It is no different from insider trading in financial circles, and insider traders go to jail when caught and convicted. We have no mechanism to catch and punish scientists who cheat in this way, but in my mind, those guilty of using peer review to steal data to advance their own research should be in jail with their dishonest hedge-fund peers. Finally, if an editor makes an honest mistake and sends you a manuscript to review from a recently departed student or postdoc or someone with whom you regularly collaborate, decline the review, and notify the editor promptly of the conflict.
Just as we should not steal from our enemies, we should not help our friends publish their work by being noncritical and overly generous peer reviewers.

Peer review is only as fast as the slowest reviewer. Many journals exhort, cajole, even telephone reviewers to get them to turn in late reviews. Let's face it: it does not take 4 weeks to review a paper. We agree to review it and then ignore it until the third reminder email jogs our memory; only then do we download, read, and spend 1 or 2 days thinking about the paper before writing the review. By this time, it may be 1 week or 1 month past the deadline, and the editors are telephoning the office every other day to get the review submitted. I am more than sometimes guilty of this terrible behavior and realize that I am contributing to the stalling of the process when I do so. Please agree to review a manuscript only if you know you will be able to turn in the review within a few days of the deadline.

If each scientific author, editor, and reviewer follows each of these suggestions starting today, we might just be able to get papers accepted within 4 weeks of submission. That would be an amazing change in the scientific publishing culture and a great gift to young scientists just entering the system.