The Wisdom of Crowds, the Madness of Crowds: Rethinking Peer Review in the Web Era
2010; Elsevier BV; Volume 57, Issue 1; Language: English
DOI: 10.1016/j.annemergmed.2010.11.012
ISSN: 1097-6760
Topic(s): scientometrics and bibliometrics research
The case for editorial peer review, in the abstract, appears unassailable: it makes the difference between order and anarchy in the scientific literature. Without expert prepublication refereeing, a flood of sloppy methodology, unreadable or misleading prose, wishful thinking, half-truths, and outright falsehoods would overwhelm the reliable reports on which scientific progress and sound clinical practice depend. It is indispensable for sorting out credible reports from polemics advocating trepanation, astrology, or flat-earthism. Actual peer review practices, history and scholarship suggest, do not consistently live up to this ideal. “Peer review is supposed to be what determines the quality of science,” says Annals editor in chief Michael L. Callaham, MD, “and yet we know nothing about it.” Even less is known about the concept of open review, or so-called crowdsourcing. A recent open-review experiment by Shakespeare scholars and the successful collective solution of a mathematical problem[1] attracted mass-media attention to this practice's broader potential, but it may be too early to tell whether it could ever challenge the more traditional model. In some views, the reviewer anonymity of conventional peer review is conducive to candor; in others, it is corrosive to accountability. Peer review is either the key to meritocracy, purifying science of commercial, political, and other irrelevant pressures, or the mechanism by which an old boys' network preserves its half-earned authority. Some justify the extraordinary efforts devoted to peer review, often thankless and usually uncompensated, by invoking its role in quality control; others find the practice riddled with incompetence, conflict of interest, interpersonal strife, assorted biases (including pervasive bias toward positive results, along with predictable personal leanings), and occasional intellectual property theft.[2, 3, 4, 5] On one point, the system's defenders and critics agree: it does nothing to prevent fraud, that periodical stain on the research community's reputation.[6] These problems are often, though not always, intertwined with what Judson[7]
called “the contradiction that makes peer review possible at all … that the persons most qualified to judge the worth of a scientist's grant proposal or the merit of a submitted research paper are precisely those who are that scientist's closest competitors.” “It's basically a 200-year-old process that was developed by English [and Dutch] country gentlemen,” Dr. Callaham continues, “at a time when there would be maybe 30 or 40 other people in the world [with whom] you could have an intelligent discussion. … It really didn't take hold until after World War II; before that, most of the science that you read was not really peer reviewed.” Some trace peer review as far back as Aristotle[8]; institutionalized by the Royal Society's Philosophical Transactions,[9] it became standard practice in the postwar era, despite the massive increases in the numbers of scientists and scientific publications during those years and the enduring problem of recruiting capable reviewers. “Everybody uses it and relies on it, and yet nobody had studied it,” says Dr. Callaham. “The method that selects science ought to, itself, be scientifically examined and proven.” Moreover, the dramatic expansion of access to scientific articles through the World Wide Web and the examples set by other disciplines, in which preprints are broadly circulated before editorial refereeing—sometimes bypassing that step entirely—pose new challenges to the peer review system. The scalability of electronic communications not only speeds and coordinates editorial communications but makes open review practical, at least in some disciplines. Advocates of “crowdsourced” or “Web 2.0” review, either before or after publication, claim that such a procedure is preferable on grounds of equity, transparency, and perhaps review quality, as well as more obvious online features such as speed or range of opinion. Defenders of traditional peer review, encountering arguments that it serves only the interests of established publishers and professional societies,[10] may find themselves in a position of relying on a body of evidence that is far from conclusive. Biagioli,[11] professor of the history of science at Harvard, has linked the rise of peer review by the 18th-century precursors of today's scientific organizations, specifically the Royal Society and the French Académie des Sciences, to 17th-century practices more closely resembling the imprimaturs conferred by state censors.
To some contemporary advocates of open review, the conventional system still bears certain traces of those origins in disciplinary practices, in a somewhat sinister sense: the antithesis of a reliable merit-based filter. With open review arrangements still fairly rare in biomedical research, and crowdsourced review (a process distinct from open expert review) rarer, it may be too early to determine whether the electronic alternatives gaining popularity in other fields offer appropriate advantages for physicians. Yet open review is already incorporated in one form into the procedures of one globally prominent journal and is critical to the mission of that journal's innovative, imminent spinoff project. It raises questions that focus attention on the assumptions and uncertainties surrounding a practice considered central to both the conduct of science and the construction of scientific communities. In the 1980s, peer review became an object of study in its own right. Annals, under Dr. Callaham's leadership, has treated peer review as a critical intellectual problem as well as a regular component of editorial procedure, and it has actively participated in such research.[12] It has provided training for reviewers and monitored its own reviewers' performance.[13, 14] After some 25 years' worth of investigation into a topic notoriously resistant to analysis,[15] says Dr. Callaham, the consensus emerging from “a handful of decent studies, less than half a dozen” is that peer review does help improve articles, but not enormously, and that its gatekeeping effect is overrated. Even glaring errors in studies frequently slip through. Annals is one of several journals that have tested their reviewers[16, 17, 18, 19]
by circulating fictitious articles with deliberately inserted flaws, Dr. Callaham reports, finding that “basically, peer reviewers did dreadfully … they missed at least half of both major and minor errors.” Problem areas included determining whether conclusions follow from results, detecting bias, and checking that sources are cited accurately. “These were mostly the bigger, better journals,” he adds, “journals that actually cared enough about it to invest the time and trouble to do the study. … Their results, which are pretty discouraging, are the best of the best.” The quadrennial International Congresses on Peer Review and Biomedical Publication, inspired and organized by Journal of the American Medical Association deputy editor Drummond Rennie, MD, have driven much of the scholarship in this area. At the outset of this project, Dr. Rennie called attention to one fallacy about peer review's success at keeping weak articles out of the discourse: “One trouble is that despite this system, anyone who reads journals widely and critically is forced to realize that there are scarcely any bars to eventual publication. There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for an article to end up in print.”[20] Though the existing review system undoubtedly blocks some of the worst such articles, the ideal system would filter out (or at least drastically improve, through the interactions of editors and authors) the majority, rather than simply shunting them from more prestigious journals to lesser ones. Peer review as usually practiced, one can infer from Dr. Rennie's observation, does not so much perform gatekeeping as triage. In 1998, Dr. Rennie's opening address to the third International Congress expressed the desire for anonymous peer review to join anonymous authorship on the scrap heap of history, replaced by a “fully open” system identifying reviewers as well as authors, grounded in a belief “that openness strengthens the link between power and accountability.”[21] A dedicated theme issue of JAMA after the fourth Congress, illustrated with a cartoon of Dr. Rennie as Moses leading colleagues through the desert,[22]
included several assessments finding that conventional peer review was not yielding demonstrably superior scientific results, along with a call for open procedures by Fiona Godlee, BSc, MB BChir, MRCP, of BioMed Central (later editor in chief of the British Medical Journal [BMJ]) on multiple grounds: ethics, feasibility, lack of adverse effects, and a balance of accountability and credit for reviewers' work. The BMJ initiated a form of open review in 1999,[23] identifying reviewers to authors (though not to readers) at the same time that it published a report[24] indicating that this variable had no effect, positive or negative, on review quality. Richard Smith, MD, MS, the BMJ's editor in chief at the time, argued that the burden of defensibility should rest on conventional anonymous procedures, not on the newer system, and that the likely gains in ethics and civility would outweigh the potential loss of young reviewers who fear that signed reviews might damage their careers. He also conjectured that “we may move to a system where authors and readers can watch the peer review system on the World Wide Web as it happens and contribute their comments. Peer review will become increasingly a scientific discourse rather than a summary judgment.” Dr. Smith, now a board member at the Public Library of Science, has followed research on the subject over the years and grown skeptical of peer review as an institution.[25] He has written that “Peer review might disappear because its defects are so much clearer than its benefits. It is slow, expensive, profligate of academic time, highly subjective, prone to bias, easily abused, poor at detecting gross defects, and almost useless for detecting fraud.”[26] In the absence of conclusive evidence for its value, except to allocate scarce journal space, and in the awareness that digital publishing is not subject to the same scarcity constraints as print, Dr. Smith sees no serious objection to reversing the traditional procedural sequence in which closed review precedes publication. Putting publication first and letting review follow is an intriguing act of faith on several levels: in the purported wisdom of crowds, in potential contributors to choose a publication method that might expose their work's flaws to general scrutiny, in reviewers to balance courtesy with useful correctives, and in readers to find the whole enterprise worthy of attention.
In the fall of 2010, the BMJ plans to launch a new online publishing venture, BMJ Open, inviting submissions on medical research in any therapeutic area (though excluding clinical case reports) and welcoming both high- and low-impact studies of any size.[27] BMJ Open will place peer review documents in public view once articles are accepted, require reviewers to sign their comments, and present all material to anyone with Internet access, free from subscriber paywalls. It will operate alongside the conventional BMJ, covering expenses through an author-pays model (waived in cases in which institutional support is unavailable) and publishing work that has not found an outlet elsewhere, including in BMJ itself. In an effort to optimize readers' direct access to evidence for independent analysis, it encourages public presentation of raw data sets. The distinction between open and crowdsourced review is important; BMJ Open adheres to the former model. “Anything published will have been peer reviewed in the ‘usual’ way,” says managing editor Richard Sands, “ie, reviewed by external peer reviewers via an editorial office. So anything accepted for publication will have been through a formal peer review procedure, ‘open’ to its participants but not the public. If the article is accepted, then the prepublication history (previous versions, peer review comments, and author replies) will be made public, alongside the final typeset and proof-checked manuscript. So we are not crowdsourcing reviews to determine publication.” BMJ deputy editor Trish Groves, MBBS, MRCPsych, notes that other journals, including PLoS Currents, use a community peer review process but comments that Nature's 2006 experiment with public review alongside standard peer review[28] was “largely unsuccessful.” Few authors agreed to participate (only 5% of those invited), the numbers of page views and comments were small, and editors likened their efforts to obtain comments to “pulling teeth.” At about the same time that Dr. Rennie, Dr. Smith and colleagues, and others were calling for revised review processes, the Medical Journal of Australia (MJA) became one of the first biomedical journals to experiment with a form of dynamic online peer review.[29] With authors' and reviewers' consent, the journal electronically published 56 articles that had already been reviewed and accepted, along with the reviewers' reports and selected e-mail comments from readers. The MJA's Web site thus became a publicly scrutinized space in which authors could reply or revise their articles in response to readers' reactions. After an open-review stage lasting a median of 10 weeks, articles were copyedited and published in the print journal as before.
Majorities of both authors (81%) and reviewers (92%) approached for the project consented to it, and 62% of participating reviewers were willing to sign their reviews; the others chose to retain anonymity, often because of their institutions' preference. Reviewer performance scores did not differ significantly from prestudy scores, though prestudy outlier scores, both high and low, moved closer to the mean. Of 52 open-review comments, largely short and specific, 29% led to authorial changes affecting 7 articles. These numbers are relatively small, and the articles involved were not a random sample; the editor withheld certain articles from the study for various reasons (to link them to editorials, to give all readers simultaneous access, or because resource limits constrained workflow). Nevertheless, these results suggested that open review is palatable to participants, comparable to conventional private procedures in review quality, and occasionally improves articles; the experience set an important precedent. MJA deputy editor Bronwyn Gaut, MB BS, DCH, DA, reports that a follow-up study was planned[30] but abandoned for reasons unknown. The journal adopted all-electronic (though not open) review procedures in 2005 and maintains a rapid-publication section[31] for fast-tracked articles. In subsequent reflections[32] on this and related endeavors, former MJA communications development manager Craig Bingham places his journal's initial venture in the context of efforts in multiple fields (from physics and environmental science to psychology and cultural studies) to transform peer review from a black-box process into discussion formats with various levels of openness. Each has its pros and cons, and Bingham's report acknowledges field-specific drawbacks, including, for medical e-journals, clashes with publication policies and publicity embargoes. Some electronic review methods merely replicate existing procedures, accelerating editorial communications without substantively transforming them. Others merge the editorial process with responses that would otherwise appear in postpublication commentary (a form of extended peer review traditionally conducted through letters to editors and subsequent separate studies) so that preliminary data and reports can attract peer contributions and shape the final report. In Web journals on rhetorical theory[33] and other fields that prize collective experimentation over the delineation of individual contributions, the distinctions between authorship and dialogue blur entirely; one example is RhetNet, “a dialogic publishing [ad]venture,” according to its Web site, exploring what net publishing might be “in its ‘natural’ form.” This led Bingham to comment that “Rhetnet does not seem to have a peer review process so much as be a peer review process.
It is a method quite alien to biomedical journals, but not unlike a scientific meeting or the consensus processes of a working group.” Other disciplines, beginning with high-energy physics, have moved to a self-publication model based on the circulation of electronic preprints (the “e-prints” found on the arXiv.org system developed by physicist Paul Ginsparg at Los Alamos National Laboratory and now hosted at Cornell), again dissolving the distinctions among reviewers, authors, and readers. Some claim that these developments, combined with the changing economics of publication, imply that the entire journal format is approaching extinction,[34] but Bingham and others note that the e-print approach might not translate smoothly from the close-knit communities well versed in the abstractions of mathematics and physics to the clinical fields, where prematurely accessible information would find a much wider audience. Formats used by the MJA, the Cochrane Collaboration, and other biomedical enterprises extend the commentary process but neither dispense with structured peer review nor leave the process so open that the end product, a complete published article, becomes unrecognizable. Features resembling the arXiv self-publishing model appeared in the E-Biomed proposal by Harold Varmus, MD, in 1999 but disappeared by the time that proposal morphed into PubMed Central, which preserves the roles of traditional publishers and peer reviewers rather than giving the public free access to all pre- and postpublication materials.[35] In a 2007 blog entry that Dr. Groves cited in a presentation to the Council of Science Editors (an instance affirming the occasional professional value of the Web's volunteer-driven infosphere), freelance editor Matt Hodgkinson[36] offered a typology of review systems along a closed-to-extremely-open continuum: traditional anonymous review; open (named) prepublication review with the option of reader comments; open and permissive review, with author-solicited reviews as in BioMed Central's Biology Direct; community review, or true crowdsourcing, as tried briefly by Nature but used with more success elsewhere; permissive review with postpublication commentary; and postpublication commentary with no review. The last of these represents the purest expression of faith in unmediated crowdsourcing, as on the general academic site Philica, “where ideas are free,” as its slogan holds, but also where frank pseudoscience[37] has proliferated. Medical editors and reviewers agree that different systems suit different fields. “At these Peer Review Congresses,” says Dr.
Callaham, “there's usually a pretty wide array of disciplines represented, and the math and physics guys always kind of look at us like, ‘What’s your problem?'” Medicine's slower adoption of open online review puzzles them, yet the distinction may not reflect institutional conservatism so much as the different kinds of complexity and uncertainty encountered in nonclinical and clinical sciences. “Actually,” Dr. Callaham comments, “math is simple compared to real life.” Gregory W. Hendey, MD, professor of clinical emergency medicine at the University of California, San Francisco and a regular reviewer for Annals, concurs. “I don't mean to simplify math or physics,” he says, “but I think in many basic sciences you can study things in a much more controlled laboratory setting and get black-and-white answers much more easily than you can studying how patients respond in a clinical setting. And if things are more consistent and black and white, you probably could get more consistent comments in an open forum than you could for a medical question. I'm not saying there's not a place for it in medicine; it just seems to me that the disadvantages of a purely open system would greatly outweigh any advantage.” “There are some problems or issues with the current style of the peer review process,” Dr. Hendey continues, “but I'm not sure that open peer review fixes any of those. It may address some of the issues, but it may create other problems of its own.” One may be to exacerbate an existing problem: finding reviewers with the desired combination of content expertise, methodological knowledge, communication skills, and ability to commit time. Too few journals, Dr. Hendey notes, take the trouble that Annals and others do to orient and train reviewers or to provide dedicated reviewers for statistics and other methodological specialties. In the online environment, he conjectures, “you might get lots of comments from people who like the paper or dislike the paper for whatever reason, but they may not have the background or experience or expertise to really make a valuable critical assessment of the paper. … On the plus side, you get lots of opinions; on the minus side, you're not sure how many of those opinions really count.” Quoting his colleague W. Richard Bukata, MD, clinical professor of emergency medicine at the University of Southern California, Dr. Hendey offers a useful metaphor: “Three s
References
1. Cohen P. Scholars test Web alternative to peer review. New York Times. August 24, 2010:A1. http://www.nytimes.com/2010/08/24/arts/24peer.html?sq=scholars%20web&st=cse&scp=1&pagewanted=all. Accessed September 28, 2010.
2. Resnik DB, Gutierrez-Ford C, Peddada S. Perceptions of ethical problems with scientific journal peer review: an exploratory study. Sci Eng Ethics. 2008;14:305-310. doi:10.1007/s11948-008-9059-4. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2642979/. Accessed September 28, 2010.
3. Bad peer reviewers [editorial]. Nature. 2001;413:93.
4. Engber D. Quality control: the case against peer review. Slate. April 5, 2005. http://www.slate.com/id/2116244/. Accessed September 28, 2010.
5. Judson HF. The problems of peer review. In: The Great Betrayal: Fraud in Science. New York, NY: Houghton Mifflin; 2004:244-286.
6. Berger E. Peer review: a castle built on sand or the bedrock of scientific publishing? Ann Emerg Med. 2006;47:157-159. doi:10.1016/j.annemergmed.2005.12.015. http://www.annemergmed.com/article/S0196-0644%2805%2902102-5/fulltext. Accessed September 28, 2010.
7. Judson HF. Structural transformations of the sciences and the end of peer review. JAMA. 1994;272:92-94. http://www.ama-assn.org/public/peer/7_13_94/pv3112x.htm. Accessed September 29, 2010.
8. Barnes J. Proof and syllogism. In: Berti E, ed. Aristotle on Science: The Posterior Analytics. Proceedings of the Eighth Symposium Aristotelicum. Padua, Italy: Editrice Antenore; 1981:17-59.
9. Zuckerman H, Merton RK. Patterns of evaluation in science: institutionalization, structure and functions of the referee system. Minerva. 1971;9:66-100.
10. Harnad S. Arnold Relman's NEJM editorial about NIH/E-biomed. American Scientist Forum (listserv posting). July 19, 1999. http://listserver.sigmaxi.org/sc/wa.exe?A2=ind99&L=american-scientist-open-access-forum&D=1&F=l&P=20403
11. Biagioli M. From book censorship to academic peer review. Emergences: J Study Media Composite Cultures. 2002;12:11-45.
12. Callaham ML. Research into peer review and scientific publication: journals look in the mirror. Ann Emerg Med. 2002;40:313-316. http://www.annemergmed.com/article/S0196-0644%2802%2900045-8/fulltext. Accessed September 28, 2010.
13. Callaham ML, Knopp RK, Gallagher EJ. Effect of written feedback by editors on quality of reviews: two randomized trials. JAMA. 2002;287:2781-2783. http://jama.ama-assn.org/cgi/content/full/287/21/2781. Accessed September 28, 2010.
14. Green SM, Callaham ML. Current status of peer review at Annals of Emergency Medicine. Ann Emerg Med. 2006;48:304-308. http://www.annemergmed.com/article/S0196-0644%2806%2901015-8/fulltext. Accessed September 28, 2010.
15. Jefferson T, Wager E, Davidoff F. Measuring the quality of editorial peer review. JAMA. 2002;287:2786-2790. http://jama.ama-assn.org/cgi/content/full/287/21/2786. Accessed September 28, 2010.
16. Baxt WG, Waeckerle JF, Berlin JA, et al. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998;32:310-317. http://www.annemergmed.com/article/S0196-0644%2898%2970006-X/fulltext. Accessed September 28, 2010.
17. Nylenna M, Riis P, Karlsson Y. Multiple blinded reviews of the same two manuscripts: effects of referee characteristics and publication language. JAMA. 1994;272:149-151. http://jama.ama-assn.org/cgi/content/abstract/272/2/149. Accessed September 29, 2010.
18. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA. 1998;280:237-240. http://jama.ama-assn.org/cgi/content/full/280/3/237. Accessed September 28, 2010.
19. Schroter S, Black N, Evans S, et al. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008;101:507-514. doi:10.1258/jrsm.2008.080062. http://resources.bmj.com/bmj/about-bmj/about-bmj/evidence-based-publishing/What%20errors%20do%20peer%20reviewers%20detect.pdf. Accessed September 28, 2010.
20. Rennie D. Guarding the guardians: a conference on editorial peer review. JAMA. 1986;256:2391-2392.
21. Rennie D. Freedom and responsibility in medical publication: setting the balance right. JAMA. 1998;280:300-302.
22. Rennie D, Flanagin A, eds. Peer Review Congress IV: a JAMA theme issue [This week in JAMA]. JAMA. 2002;287:2749. http://jama.ama-assn.org/cgi/content/full/287/21/2749. Accessed September 28, 2010.
23. Smith R. Opening up BMJ peer review. BMJ. 1999;318:4-5. http://www.bmj.com/content/318/7175/4.full. Accessed September 28, 2010.
24. van Rooyen S, Godlee F, Evans S, et al. Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ. 1999;318:23-27. http://www.bmj.com/content/318/7175/23.full. Accessed September 28, 2010.
25. Smith RW. In search of an optimal peer review system. J Participat Med. 2009;1:e13. http://www.jopm.org/opinion/2009/10/21/in-search-of-an-optimal-peer-review-system/. Accessed September 29, 2010.
26. Smith R. The future of peer review. In: Godlee F, Jefferson T, eds. Peer Review in Health Sciences. 2nd ed. London, England: BMJ Books; 2003:329-346. http://resources.bmj.com/bmj/pdfs/smith.pdf
27. BMJ Open. http://blogs.bmj.com/bmjopen/
28. Greaves S, Scott J, Clarke M, et al. Overview: Nature's peer review trial [editorial]. Nature. 2006. doi:10.1038/nature05535. http://www.nature.com/nature/peerreview/debate/nature05535.html. Accessed September 30, 2010.
29. Bingham CM, Higgins G, Coleman R, et al. The Medical Journal of Australia Internet peer review study. Lancet. 1998;352:441-445. doi:10.1016/S0140-6736(97)11510-0. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2897%2911510-0/fulltext. Accessed September 29, 2010.
30. Bingham C, Van Der Weyden MB. Peer review on the Internet: launching eMJA peer review study 2. Med J Aust. 1998;169:240-241. http://www.mja.com.au/public/issues/sep7/bingham/bingham.html. Accessed September 29, 2010.
31. http://www.mja.com.au/public/rop/contents_rop.html
32. Bingham C. Peer review on the Internet: a better class of conversation. Lancet. 1998;351(suppl, Internet guide):10-15. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(98)90307-5/fulltext. Accessed September 28, 2010.
33. RhetNet. http://wac.colostate.edu/rhetnet/
34. Odlyzko AM. Tragic loss or good riddance? The impending demise of traditional scholarly journals. Intern J Human Computer Studies. 1995;42:71-122. doi:10.3217/jucs-000-00-0003. http://www.jucs.org/jucs_0_0/tragic_loss_or_good/Odlyzko_A.html. Accessed September 29, 2010.
35. Kling R, Fortuna J, King A. The remarkable transformation of E-Biomed into PubMed Central. Indiana University Center for Social Informatics; 2002. http://rkcsi.indiana.edu/archive/CSI/WP/wp01-03B.html
36. Hodgkinson M. Open peer review & community peer review. http://journalology.blogspot.com/2007/06/open-peer-review-community-peer-review.html
37. Kelly A. The intelligent design of the cosmos. http://philica.com/display_article.php?article_id=50