Open-access, peer-reviewed article

Peer review: A castle built on sand or the bedrock of scientific publishing?

2006; Elsevier BV; Volume: 47; Issue: 2; Language: English

10.1016/j.annemergmed.2005.12.015

ISSN

1097-6760

Authors

Éric Berger

Topic(s)

Health and Medical Research Impacts

Abstract

Albert Einstein’s revolutionary 1905 Annus Mirabilis papers weren’t peer reviewed. Neither was Watson and Crick’s 1953 paper on DNA structure. Conversely, the work of Jan Hendrik Schön was peer reviewed and only later discovered to be a spectacular fraud.

Schön, a former Bell Labs scientist, authored or co-authored one research paper every 8 days in 2001. An astonishing 15 of Schön’s papers were accepted for publication in Nature and Science, 2 of the most respected and influential journals in the scientific community. But after sufficient questioning by other physicists, Schön was proved a fake.

Some scientists, and certainly the general public, found it disturbing that myriad peer reviewers did not catch the fabrications. But the reality is that peer review assumes a research article has been honestly written. It doesn’t catch outright fraud. Some critics say it also promotes favoritism, stifles creative new ideas and fails to promote cooperation among scientists. Furthermore, no scientist has proven that peer review actually works—that it improves the quality of published research.

A palace on a pin

So here stands this institution, which cannot lay claim to some of the biggest paradigm shifts in modern science and must admit to many more scandals than Schön’s. And yet its influence in the scientific community is profound. It is the very foundation of funding, publication and career promotion, but on close examination this bedrock has more than a few cracks.

While the idea of peer review dates back several centuries, it became the standard for scientific publication only after World War II, when a groundswell of research began overwhelming journal editors. Today, no scientific work is taken seriously unless it has been vetted by a panel of experts in the field. To encourage candor from these volunteer referees, reviews have been blinded, meaning authors don’t know who the referees are.
Despite its somewhat arbitrary nature and dependence on reputation and word of mouth, peer review has become the one and only path to scientific success, the sine qua non of an impressive curriculum vitae.

Twenty years ago, Dr. Drummond Rennie, a deputy editor of the Journal of the American Medical Association, began asking why so little was known about such a critical, widely used tool. He decided to call for a congress of scientists to discuss the topic. A JAMA cartoon would later depict him as a Moses leading fellow scientists through the wilderness.

“Pretty quickly it became clear a lot of people shared my anxieties about this topic,” Rennie said. “Anyone can chat about a problem, but talk is cheap. And at that time there was very little or no research on peer review.”

That is no longer the case. This past September in Chicago, Rennie concluded the Fifth International Congress on Peer Review and Biomedical Publication. Rennie’s efforts have helped ignite a firestorm of research on the topic, rising from 1 or 2 scholarly papers annually in the 1980s to about 200 a year today.

Having accumulated a mountain of research, academics have been able to agree on 1 point: peer review is a tremendously difficult subject to study.

“I’m left with a sort of paradox all the time,” Rennie said. “The more studies that are done, the more it’s proved difficult to prove that peer review is really beneficial, or that it makes much difference to the product. But as an editor, I know it does…. Yet quantifying that is very difficult.”

That’s because the ideal evaluation of peer review is not easily performed in the real world.
Elizabeth Wager, a London-based publication consultant, and 2 colleagues proposed such a study 3 years ago in JAMA.

For medicine, assuming the ultimate goal of research is improved health care, such a review would require a large-scale, long-term project in which studies were divided into 2 randomized groups; one would undergo peer review, the other an alternative method of assessment. A lengthy follow-up would be required to measure for health care improvements.

Recognizing the near impossibility of such a study, Wager and her colleagues suggested a simpler task as a first step: identifying the objectives of peer review. Even this has proved challenging.

“I don’t think we have gotten any further in defining the goals of peer review,” Wager said. “My suspicion is that these vary between journals; for example, some want to reject a lot, others want to help authors improve their reports—yet they tend to be lumped together.”

The journal Nature, for example, publishes about 5 percent of papers submitted to its editors. Clearly, one of its goals for peer reviewers is to identify true breakthroughs in scientific fields. Other journals, which publish far higher percentages of submitted papers, are more concerned with weeding out mistakes and poor research.

Despite the criticism of some academics, not all medical researchers perceive a problem with peer review. Some scientists generally believe the process helps editors sort through submissions and dump the bad science.

“I just don’t hear a large outcry that the peer review system is totally broken,” said Dr. Stacey Berg, a pediatrician at Baylor College of Medicine and Texas Children’s Hospital.
“By and large, most scientists I know believe if they design good experiments and get good reproducible data, they’ll get a fair hearing.”

However, critics believe peer reviewers, pressed for time, may not properly understand the research, or may view the author as a competitor and comment harshly, or ask the author to perform an “impossible experiment.” Moreover, a common frustration among authors is the closed nature of peer review: authors neither know who their reviewers are nor can respond directly to criticisms. Given scientists’ desire to freely communicate ideas and thoughts, such a system is antithetical to research, believes Wim D’Haeze, a bioengineer at The Scripps Research Institute.

“In my opinion, the peer review system in its current state, applied by the gross of the scientific journals, is in many aspects unfair, undemocratic and old-fashioned,” D’Haeze recently wrote in a perspective for the Science Advisory Board.

The sins of peer review

Although journal editors widely use peer review, they’re also quick to recognize its limitations. Former editor of the British Medical Journal, Dr. Richard Smith, jokes that an equally valid method of identifying publishable research would be to stand at the top of the stairs with a pile of submissions and toss them down. Those that reach the bottom would be published. A jest, yes, but it reflects the angst with the process.

Published researchers are quick to offer peer review horror stories. Dr. Virginia Moyer, a pediatrics professor at the University of Texas Medical School at Houston and an editorial board member of the journal Pediatrics, has submitted papers and received one review saying the research was elegant, and a second characterizing it as awful. Moyer recalls submitting a paper to JAMA several years ago, only to receive a speedy rejection.
Shortly afterward, another journal within the JAMA family accepted the paper, and her research was later touted as a “must read” within a section of the main JAMA journal that highlights research in its other periodicals.

“The real question is not whether peer review is broken, but rather, was it ever not broken?” Moyer said.

It may be a long time in coming, but Annals of Emergency Medicine editor in chief Michael Callaham, MD, foresees a day when editors and reviewers will have formal training and will be held to standards in editing, language skills, journalism and science writing.

“Compare it to clinical medicine, where you have to undergo formal testing and you have to reach a certain level of performance,” he said. “Most reviewers, editors, and journals do none of these things, so it is like practicing medicine back in the days before a license to do so was even required. Anyone can produce a journal and use any standards they see fit, including not having any standards at all or not revealing them.”

It is difficult to quantify an improvement to peer review if its original value remains in question. That has not stopped some researchers from proposing changes or alternatives to the process.

One suggestion has been closed, or blind, peer review. It could be called “double-blinded” because both authors and reviewers are unaware of each other’s identities. Advocates of closed peer review argue it would mitigate the problem of deference toward well-established scientists, as well as reduce the opportunity for a scientist to more sharply criticize the work of a known competitor.

“We felt that if blinding in research was a good thing, it was probably also a good thing that reviewers not be influenced by their preconceptions of the author or institution’s quality,” Annals’ Callaham said.

Opponents of closed review, JAMA’s Rennie among them, believe it only encourages gaming and guessing and doesn’t improve review quality.
However, evidence suggests that when scientists guess at blinded identities, they are usually spectacularly wrong.

Another option is an entirely open system, under which the referees sign their reviews. Proponents believe open peer review fosters constructive rather than destructive criticism.

Minding the status quo

The British Medical Journal has opted for open review. But an editorial in the November 2005 issue of Nature Cell Biology suggests many journals probably will not diverge from the status quo.

“We have often considered signed reports, and indeed we will allow this if the referee so desires,” the editorial states. “However, in our experience this tends to select against incisive critique: too much is at stake.”

As journal editors grapple with and tweak the current system of peer review, some scientists have advocated abandoning it entirely. At least one journal may well wish it hadn’t. Social Text, a post-modern cultural studies journal, hoped to attract more original, less conventional research by doing away with peer review. In what became known as the “Sokal Affair,” the journal was duped and publicly humiliated in 1996 by a New York University physicist. NYU professor Alan Sokal submitted a sham paper titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” which the journal readily published without peer review. Had the journal sent the manuscript to another physicist for review, Sokal’s “joke” might have been easily recognized.

Dr. David Kaplan, an immunologist at Case Western Reserve University, proposes revamping the entire system, but not outright abandonment.

“Peer review emphasizes competition, and discourages cooperation,” he said.
“The problems with the system have such deep roots that what we really need is a different system.”

Kaplan accepts that peer review seeks to serve more than 1 aim and proposes that it be split into 2 functions: first, reviewers attempt to improve manuscripts by offering constructive criticism; second, decisions are rendered on the significance of the findings.

To this end Kaplan has suggested a new system of peer review: an author solicits reviews from colleagues, who identify revisions to improve the manuscript. The reviewers then write an evaluation of the significance of the revised work. Afterward the author submits the manuscript and the signed reviews to a journal’s editors, who would make a decision to publish based upon the evaluations. If accepted, the reviewers’ identities would be printed along with the research.

Kaplan believes such a system would encourage authors to produce more complete, significant work, lest they be unable to obtain favorable reviews from their colleagues. There may be opportunities to manipulate this system, too, such as writing a favorable review in return for another.

After he published these ideas, Kaplan was alerted to a similar effort just begun by BioMed Central, an independent publishing house that promotes open access to peer-reviewed research. BioMed’s newest experiment is Biology Direct, in which a study author chooses referees from a panel of reviewers pre-selected by the editors. Reviewers’ comments will then be published alongside the article. Kaplan has since joined the organization to help establish an immunology journal using these peer review criteria.

Some ideas are still more radical.
Cornell University physicist Paul Ginsparg has proposed putting preprints of scientific articles on the Internet, where they could be downloaded, vetted and tested by anyone in the scientific community before being passed on for formal publication.

The irony of “democracy”

If this sounds like anarchy, consider what King George III must have thought of democracy. Rennie, Moyer and other journal editors, for the most part, have not sought to put down any of these mini-revolutions. They believe the new experiments are healthy. To oppose new ideas and frown upon testing them, they say, would be wholly unscientific. And they expect there will be more attempts to modify peer review, especially in an era when open access and the Internet are challenging the hegemony of the well-established print journals.

Democracy is actually an oft-used analogy for peer-review proponents, who like to recall Winston Churchill’s famous quote: “No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.”

No one pretends peer review is perfect, and, as it is currently practiced, it isn’t very democratic either. After 20 years of soul-searching, Rennie and other scientists are realizing they can’t even determine how imperfect peer review is. So it’s probably reasonable to expect many more congresses on peer review, held every 4 years.

Emergency medicine and peer review

“The big unknown is still, how can we quantify the benefits of peer review? What exactly are they?” Annals’ Callaham said. “We all believe in them, but we can’t put numbers on them, probably because our research is too early in its infancy and the numbers we study are too small.”

Peer review may turn out to be a vestigial organ, the appendix of the body scientific. That seems unlikely, though.
The Internet is the ultimate example of a non-peer-reviewed system, but only a fool would take for gospel information gleaned from a random Web site. (For example, http://www.alaska.net/~clund/e_djublonskopf/Flatearthsociety.htm contends the world is flat.)

“We find that despite all this free knowledge from all sources, those with reputations who follow a certain process are consistently read and believed more than the others,” he said.

Emergency medicine has the benefit of being a younger specialty, less trapped in the hierarchy that often slows change, allowing it to adapt as new evidence emerges, Callaham said.

“Emergency medicine has done well in this area so far and will continue to do so, I think,” he said, “because it has been less authority driven than some older specialties. Emergency physicians want to see the evidence, thank you. The evidence will drive how we alter the process.”

Annals of Emergency Medicine has been a leader in peer review research for the past 12 years. Editors Wears, Schriger, Cooper, Baxt, Weber, and Callaham have all authored studies on the topic, and the journal has a special collection on its Web site (http://www.annemergmed.com/content/sciencepeer). Annals has been going about the slow work of “fixing” peer review for over a decade, and it is one of the few journals that “grades” reviewers and publicly posts its detailed publication ethics policy.

T.J. Milling, News Section Editor
But after sufficient questioning by other physicists, Schön was proved a fake. Some scientists, and certainly the general public, found it disturbing that myriad peer reviewers did not catch the fabrications. But the reality is that peer review assumes a research article has been honestly written. It doesn’t catch outright fraud. Some critics say it also promotes favoritism, stifles creative new ideas and fails to promote cooperation among scientists. Furthermore, no scientist has proven that peer review actually works—that it improves the quality of published research. A palace on a pinSo here stands this institution which cannot lay claim to some of the biggest paradigm shifts in modern science and must admit to many more scandals than Schön’s. And yet its influence in the scientific community is profound. It is the very foundation of funding, publication and career promotion, but on close examination this bedrock has more than a few cracks.While the idea of peer review dates back several centuries, it became the standard for scientific publication only after World War II, when a groundswell of research began overwhelming journal editors. Today, no scientific work is taken seriously unless it has been vetted by a panel of experts in the field. To encourage candor from these volunteer referees, reviews have been blinded, meaning authors don’t know who the referees are. Despite its somewhat arbitrary nature and dependence on reputation and word of mouth, peer review has become the one and only path to scientific success, the sine qua non of an impressive curriculum vitae.Twenty years ago, Dr. Drummond Rennie, a deputy editor of the Journal of the American Medical Association, began asking why so little was known about such a critical, widely used tool. He decided to call for a congress of scientists to discuss the topic. 
A JAMA cartoon would later depict him as a Moses leading fellow scientists through the wilderness.“Pretty quickly it became clear a lot of people shared my anxieties about this topic,” Rennie said. “Anyone can chat about a problem, but talk is cheap. And at that time there was very little or no research on peer review.”That is no longer the case. This past September in Chicago, Rennie concluded the Fifth International Congress on Peer Review and Biomedical Publication. Rennie’s efforts have helped ignite a firestorm of research on the topic, rising from 1 or 2 scholarly papers annually in the 1980s to about 200 a year today.Upon accumulating a mountain of research, academics have been able to agree on 1 point: peer review is a tremendously difficult subject to study.“I’m left with a sort of paradox all the time,” Rennie said. “The more studies that are done, the more it’s proved difficult to prove that peer review is really beneficial, or that it makes much difference to the product. But as an editor, I know it does….Yet quantifying that is very difficult.”That’s because the ideal evaluation of peer review is not easily performed in the real world. Elizabeth Wager, a London-based publication consultant, and 2 colleagues proposed such a study 3 years ago in JAMA.For medicine, assuming the ultimate goal of research is improved health care, such a review would require a large-scale, long-term project in which studies were divided into 2 randomized groups; one would undergo peer review, the other an alternative method of assessment. A lengthy follow-up would be required to measure for health care improvements.Recognizing the near impossibility of such a study, Wager and her colleagues suggested a simpler task as a first step – identifying the objectives of peer review. Even this has proved challenging.“I don’t think we have gotten any further in defining the goals of peer review,” Wager said. 
“My suspicion is that these vary between journals; for example, some want to reject a lot, others want to help authors improve their reports—yet they tend to be lumped together.”The journal Nature, for example, publishes about 5 percent of papers submitted to its editors. Clearly, one of its goals for peer reviewers is to identify true breakthroughs in scientific fields. Other journals, which publish far higher percentages of submitted papers, are more concerned with weeding out mistakes and poor research.Despite the criticism of some academics, not all medical researchers perceive a problem with peer review. Some scientists generally believe the process helps editors sort through submissions and dump the bad science.“I just don’t hear a large outcry that the peer review system is totally broken,” said Dr. Stacey Berg, a pediatrician at Baylor College of Medicine and Texas Children’s Hospital. “By and large, most scientists I know believe if they design good experiments and get good reproducible data, they’ll get a fair hearing.”However, critics believe peer reviewers, pressed for time, may not properly understand the research, or may view the author as a competitor and comment harshly, or ask the author to perform an “impossible experiment.”Moreover, a common frustration among authors is the closed nature of peer review, neither knowing who their reviewers are, nor being able to directly respond to criticisms. Given the desire for scientists to freely communicate ideas and thoughts, such a system is antithetical to research, believes Wim D’Haeze, a bioengineer at The Scripps Research Institute.“In my opinion, the peer review system in its current state, applied by the gross of the scientific journals, is in many aspects unfair, undemocratic and old-fashioned,” D’Haeze, recently wrote in a perspective for the Science Advisory Board. 
So here stands this institution which cannot lay claim to some of the biggest paradigm shifts in modern science and must admit to many more scandals than Schön’s. And yet its influence in the scientific community is profound. It is the very foundation of funding, publication and career promotion, but on close examination this bedrock has more than a few cracks. While the idea of peer review dates back several centuries, it became the standard for scientific publication only after World War II, when a groundswell of research began overwhelming journal editors. Today, no scientific work is taken seriously unless it has been vetted by a panel of experts in the field. To encourage candor from these volunteer referees, reviews have been blinded, meaning authors don’t know who the referees are. Despite its somewhat arbitrary nature and dependence on reputation and word of mouth, peer review has become the one and only path to scientific success, the sine qua non of an impressive curriculum vitae. Twenty years ago, Dr. Drummond Rennie, a deputy editor of the Journal of the American Medical Association, began asking why so little was known about such a critical, widely used tool. He decided to call for a congress of scientists to discuss the topic. A JAMA cartoon would later depict him as a Moses leading fellow scientists through the wilderness. “Pretty quickly it became clear a lot of people shared my anxieties about this topic,” Rennie said. “Anyone can chat about a problem, but talk is cheap. And at that time there was very little or no research on peer review.” That is no longer the case. This past September in Chicago, Rennie concluded the Fifth International Congress on Peer Review and Biomedical Publication. Rennie’s efforts have helped ignite a firestorm of research on the topic, rising from 1 or 2 scholarly papers annually in the 1980s to about 200 a year today. 
Upon accumulating a mountain of research, academics have been able to agree on 1 point: peer review is a tremendously difficult subject to study. “I’m left with a sort of paradox all the time,” Rennie said. “The more studies that are done, the more it’s proved difficult to prove that peer review is really beneficial, or that it makes much difference to the product. But as an editor, I know it does….Yet quantifying that is very difficult.” That’s because the ideal evaluation of peer review is not easily performed in the real world. Elizabeth Wager, a London-based publication consultant, and 2 colleagues proposed such a study 3 years ago in JAMA. For medicine, assuming the ultimate goal of research is improved health care, such a review would require a large-scale, long-term project in which studies were divided into 2 randomized groups; one would undergo peer review, the other an alternative method of assessment. A lengthy follow-up would be required to measure for health care improvements. Recognizing the near impossibility of such a study, Wager and her colleagues suggested a simpler task as a first step – identifying the objectives of peer review. Even this has proved challenging. “I don’t think we have gotten any further in defining the goals of peer review,” Wager said. “My suspicion is that these vary between journals; for example, some want to reject a lot, others want to help authors improve their reports—yet they tend to be lumped together.” The journal Nature, for example, publishes about 5 percent of papers submitted to its editors. Clearly, one of its goals for peer reviewers is to identify true breakthroughs in scientific fields. Other journals, which publish far higher percentages of submitted papers, are more concerned with weeding out mistakes and poor research. Despite the criticism of some academics, not all medical researchers perceive a problem with peer review. 
Some scientists generally believe the process helps editors sort through submissions and dump the bad science. “I just don’t hear a large outcry that the peer review system is totally broken,” said Dr. Stacey Berg, a pediatrician at Baylor College of Medicine and Texas Children’s Hospital. “By and large, most scientists I know believe if they design good experiments and get good reproducible data, they’ll get a fair hearing.” However, critics believe peer reviewers, pressed for time, may not properly understand the research, or may view the author as a competitor and comment harshly, or ask the author to perform an “impossible experiment.” Moreover, a common frustration among authors is the closed nature of peer review, neither knowing who their reviewers are, nor being able to directly respond to criticisms. Given the desire for scientists to freely communicate ideas and thoughts, such a system is antithetical to research, believes Wim D’Haeze, a bioengineer at The Scripps Research Institute. “In my opinion, the peer review system in its current state, applied by the gross of the scientific journals, is in many aspects unfair, undemocratic and old-fashioned,” D’Haeze, recently wrote in a perspective for the Science Advisory Board. The sins of peer reviewAlthough journal editors widely use peer review, they’re also quick to recognize its limitations. Former editor of the British Medical Journal, Dr. Richard Smith, jokes that an equally valid method of identifying publishable research would be to stand at the top of the stairs with a pile of submissions, and toss them down. Those that reach the bottom would be published. A jest, yes, but it reflects the angst with the process.Published researchers are quick to offer peer review horror stories. Dr. 
Virginia Moyer, a pediatrics professor at the University of Texas Medical School at Houston and an editorial board member of the journal Pediatrics, has submitted papers and received one review saying the research was elegant, and the second characterizing it as awful. Moyer recalls submitting a paper to JAMA several years ago, only to receive a speedy rejection. Shortly afterward, another journal within the JAMA family accepted the paper, and her research was later touted as a “must read” within a section of the main JAMA journal that highlights research in its other periodicals.“The real question is not whether peer review is broken, but rather, was it ever not broken?” Moyer said.It may be a long time in coming, but Annals of Emergency Medicine editor in chief Michael Callaham, MD, foresees a day when editors and reviewers will have formal training and will be held to standards in editing, language skills, journalism and science writing.“Compare it to clinical medicine, where you have to undergo formal testing and you have to reach a certain level of performance,” he said. “Most reviewers, editors, and journals do none of these things, so it is like practicing medicine back in the days before a license to do so was even required. Anyone can produce a journal and use any standards they see fit, including not having any standards at all or not revealing them.”It is difficult to quantify an improvement to peer review if its original value remains in question. That has not stopped some researchers from proposing changes or alternatives to the process.One suggestion has been closed, or blind, peer review. It could be called “double-blinded” because both authors and reviewers are unaware of each other’s identities. 
Advocates of closed peer review argue it would mitigate the problem of deference toward well-established scientists, as well as reduce the opportunity for a scientist to more sharply criticize the work of a known competitor.“We felt that if blinding in research was a good thing, it was probably also a good thing that reviewers not be influenced by their preconceptions of the author or institution’s quality,” Annals’ Callaham said.Opponents of closed review, JAMA’s Rennie among them, believe it only encourages gaming and guessing and doesn’t improve review quality. However, evidence suggests that when scientists guess at blinded identities they are usually spectacularly wrong.Another option is an entirely open system. Under such a process, the referees sign their reviews. Proponents believe open peer review fosters constructive rather than destructive criticism. Although journal editors widely use peer review, they’re also quick to recognize its limitations. Former editor of the British Medical Journal, Dr. Richard Smith, jokes that an equally valid method of identifying publishable research would be to stand at the top of the stairs with a pile of submissions, and toss them down. Those that reach the bottom would be published. A jest, yes, but it reflects the angst with the process. Published researchers are quick to offer peer review horror stories. Dr. Virginia Moyer, a pediatrics professor at the University of Texas Medical School at Houston and an editorial board member of the journal Pediatrics, has submitted papers and received one review saying the research was elegant, and the second characterizing it as awful. Moyer recalls submitting a paper to JAMA several years ago, only to receive a speedy rejection. Shortly afterward, another journal within the JAMA family accepted the paper, and her research was later touted as a “must read” within a section of the main JAMA journal that highlights research in its other periodicals. 
“The real question is not whether peer review is broken, but rather, was it ever not broken?” Moyer said. It may be a long time in coming, but Annals of Emergency Medicine editor in chief Michael Callaham, MD, foresees a day when editors and reviewers will have formal training and will be held to standards in editing, language skills, journalism and science writing. “Compare it to clinical medicine, where you have to undergo formal testing and you have to reach a certain level of performance,” he said. “Most reviewers, editors, and journals do none of these things, so it is like practicing medicine back in the days before a license to do so was even required. Anyone can produce a journal and use any standards they see fit, including not having any standards at all or not revealing them.” It is difficult to quantify an improvement to peer review if its original value remains in question. That has not stopped some researchers from proposing changes or alternatives to the process. One suggestion has been closed, or blind, peer review. It could be called “double-blinded” because both authors and reviewers are unaware of each other’s identities. Advocates of closed peer review argue it would mitigate the problem of deference toward well-established scientists, as well as reduce the opportunity for a scientist to more sharply criticize the work of a known competitor. “We felt that if blinding in research was a good thing, it was probably also a good thing that reviewers not be influenced by their preconceptions of the author or institution’s quality,” Annals’ Callaham said. Opponents of closed review, JAMA’s Rennie among them, believe it only encourages gaming and guessing and doesn’t improve review quality. However, evidence suggests that when scientists guess at blinded identities they are usually spectacularly wrong. Another option is an entirely open system. Under such a process, the referees sign their reviews. 
Proponents believe open peer review fosters constructive rather than destructive criticism.

Minding the status quo

The British Medical Journal has opted for open review. But an editorial in the November 2005 issue of Nature Cell Biology suggests many journals probably will not diverge from the status quo. “We have often considered signed reports, and indeed we will allow this if the referee so desires,” the editorial states. “However, in our experience this tends to select against incisive critique: too much is at stake.”

As journal editors grapple with and tweak the current system of peer review, some scientists have advocated abandoning it entirely. At least one journal may well wish it hadn’t. Social Text, a post-modern cultural studies journal, hoped to attract more original, less conventional research by doing away with peer review. In what became known as the “Sokal Affair,” the journal was duped and publicly humiliated in 1996 by a New York University physicist. NYU professor Alan Sokal submitted a sham paper titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” which the journal readily published without peer review. Had the journal sent the manuscript to another physicist for review, Sokal’s “joke” might have been easily recognized.

Dr. David Kaplan, an immunologist at Case Western Reserve University, proposes revamping the entire system, but not outright abandonment. “Peer review emphasizes competition, and discourages cooperation,” he said.
“The problems with the system have such deep roots that what we really need is a different system.”

Kaplan accepts that peer review seeks to serve more than 1 aim and proposes that it be split into 2 functions: first, reviewers attempt to improve manuscripts by offering constructive criticism; second, decisions are rendered on the significance of the findings. To this end Kaplan has suggested a new system of peer review: an author solicits reviews from colleagues, who identify revisions to improve the manuscript. The reviewers then write an evaluation of the significance of the revised work. Afterward the author submits the manuscript and the signed reviews to a journal’s editors, who would make a decision to publish based upon the evaluations. If accepted, the reviewers’ identities would be printed along with the research.

Kaplan believes such a system would encourage authors to produce more complete, significant work, lest they be unable to obtain favorable reviews from their colleagues. There may be opportunities to manipulate this system, too, such as writing a favorable review in return for another. After he published these ideas, Kaplan was alerted to a similar effort just begun by BioMed Central, an independent publishing house that promotes open access to peer-reviewed research. BioMed’s newest experiment is Biology Direct, in which a study author chooses referees from a panel of reviewers pre-selected by the editors. Reviewers’ comments will then be published alongside the article. Kaplan has since joined the organization to help establish an immunology journal using these peer review criteria.

Some ideas are still more radical. Cornell University physicist Paul Ginsparg has proposed putting preprints of scientific articles on the Internet, where they could be downloaded, vetted and tested by anyone in the scientific community before being passed on for formal publication.

The irony of “democracy”

If this sounds like anarchy, consider what King George III must have thought of democracy. Rennie, Moyer and other journal editors, for the most part, have not sought to put down any of these mini-revolutions. They believe the new experiments are healthy. To oppose new ideas and frown upon testing them, they say, would be wholly unscientific.
And they expect there will be more attempts to modify peer review, especially in an era when open access and the Internet are challenging the hegemony of the well-established print journals.

Democracy is actually an oft-used analogy for peer-review proponents, who like to recall Winston Churchill’s famous quote: “No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.”

No one pretends peer review is perfect, and, as it is currently practiced, it isn’t very democratic either. After 20 years of soul-searching, Rennie and other scientists are realizing they can’t even determine how imperfect peer review is. So it’s probably reasonable to expect many more congresses on peer review, held every 4 years.

Emergency medicine and peer review

“The big unknown is still, how can we quantify the benefits of peer review? What exactly are they?” Annals’ Callaham said. “We all believe in them, but we can’t put numbers on them, probably because our research is too early in its infancy and the numbers we study are too small.”

Peer review may turn out to be a vestigial organ, the appendix of the body scientific. That seems unlikely, though. The Internet is the ultimate example of a non-peer reviewed system, but only a fool would take for gospel information gleaned from a random Web site. (For example, http://www.alaska.net/~clund/e_djublonskopf/Flatearthsociety.htm contends the world is flat.)

“We find that despite all this free knowledge from all sources, those with reputations who follow a certain process are consistently read and believed more than the others,” he said.

Emergency medicine has the benefit of being a younger specialty, less trapped in the hierarchy that often slows change, allowing it to adapt as new evidence emerges, Callaham said.

“Emergency medicine has done well in this area so far and will continue to do so, I think,” he said, “because it has been less authority driven than some older specialties. Emergency physicians want to see the evidence, thank you. The evidence will drive how we alter the process.”

Annals of Emergency Medicine has been a leader in peer review research for the past 12 years. Editors Wears, Schriger, Cooper, Baxt, Weber, and Callaham have all authored studies on the topic, and the journal has a special collection on its Web site (http://www.annemergmed.com/content/sciencepeer). Annals has been going about the slow work of “fixing” peer review for over a decade, and it is one of the few journals that “grades” reviewers and publicly posts its detailed publication ethics policy.

T.J. Milling, News Section Editor
