Editorial | Open access | Peer reviewed

Impact factors and prestige

2007; Elsevier BV; Volume: 71; Issue: 3; Language: English

DOI

10.1038/sj.ki.5002094

ISSN

1523-1755

Authors

Qais Al‐Awqati

Topic(s)

Meta-analysis and systematic reviews

Abstract

The science ministries of South Korea, China, and Pakistan are now offering cash rewards to their scientists if they publish papers in ‘high-impact’ journals such as Nature, Science, and Cell. The remuneration can be pretty impressive, as much as US$50,000 in China. In Pakistan, scientists can receive between $1,000 and $20,000 on the basis of their annual cumulative impact factors. In the old days, one's reputation simply depended on the words of others, presumably those who read the papers: clearly a most subjective and unsatisfactory situation. But now that good money can be received for publications, we need a quantitative, hence objective and presumably superior, criterion. But, sarcasm aside, the former method also involved an Old Boys' network whose members simply supported each other.

Fifty years ago, Eugene Garfield, a pioneer and visionary, invented the impact factor [1] by treating the published literature as a network, one more quantifiable than that used by the Old Boys. This tool provided a much-needed ‘objective’ method for evaluating the significance of published papers. There is now a large body of research that analyzes the importance of papers on this basis, along with many new formulations that attempt to improve it. It is now so established that in many institutions the cumulative impact factor of a professor is the most important criterion for promotion. Let us examine what the impact factor is, what exactly it measures, and what it has done to scientific publication.

Citation analysis was developed to protect against the uncritical citation of fraudulent, or even disputed, data. Following the example of the legal citation index, which is critical in establishing precedents, Garfield tested the idea on one article, a celebrated one by Hans Selye on the general theory of adaptation [2], and found that in one endocrinology journal, 23 papers referred to it in a single year; when he read those papers, he discovered that they were remarkably diverse, suggesting that the impact of Selye's ideas was quite large. The idea was then formalized by others [3] using the precepts of graph theory (the origin of network analysis) with a machine-searchable method.

The journal impact factor is a ratio of two measurements: the numerator is the number of citations in the current year to any article published in the journal in the previous two years, and the denominator is the number of articles published in that same two-year period. Of course, fields differ in citation practices and in the number of references; for instance, mathematics papers have far fewer references than papers in the biological sciences. Further, the half-life of citations differs: physiology articles continue to be cited long after molecular biology papers have disappeared from reference lists. But, importantly, impact factors seem not to be affected by the size of a field, because a large field with many papers, and hence many citations, is also one with a larger denominator.
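To make that ratio concrete, here is a minimal sketch in Python. The function name and the article data are invented for illustration; the input is assumed to map each article the journal published in the previous two years to the citations it received in the current year.

```python
def impact_factor(citations_received):
    """Journal impact factor for a given year: citations received in
    that year by articles published in the previous two years, divided
    by the number of articles published in those two years."""
    total_citations = sum(citations_received.values())
    total_articles = len(citations_received)
    return total_citations / total_articles

# Hypothetical journal: five articles from the previous two years,
# cited 3, 0, 7, 1, and 4 times in the current year.
articles = {"a1": 3, "a2": 0, "a3": 7, "a4": 1, "a5": 4}
print(impact_factor(articles))  # (3+0+7+1+4)/5 = 3.0
```

Note that the denominator counts every article, cited or not, which is why papers that are never cited drag a journal's factor down.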
The impact factor is here to stay and has even expanded its domain to the analysis of the impact of individual scientists, regardless of where they publish. It may be an imperfect criterion, but it is a reasonable one, as it is based on the common-sense idea that when you write a paper and refer to another one, the ideas expressed in the cited paper probably had some influence over what you wrote. Hence, it had some impact! All editors of journals eye the impact factor as a vindication of their policies, and we, the new editorial team here at Kidney International, will not receive our first report card until the summer of 2008, when all the articles in the two-year denominator will have been published under our supervision.

Many surprising results emerge when citation analyses are examined. The most cited article, I think, remains the protein-measurement paper by Lowry in the Journal of Biological Chemistry [4]. Other methods papers are also highly cited. It was also found that a small number of journals account for a great number of citations, and even among the most highly cited journals (Nature, for instance), 25% of articles provide 90% of the citations. The most troubling issue is that the most cited articles are always reviews, and review journals are the ones with the highest impact factors. Of the top ten journals with the highest impact, the top four are review journals (Table 1). Referring to a review that presents an original new hypothesis might be important, but I suspect it is rare. Most citations of reviews, I believe, are due to the laziness of writers who do not know, and do not want to look up, who did what first. This introduces a serious problem into impact factor analysis. Further, even in the most prestigious journals, a significant number of papers are never cited! So how do we evaluate the importance of scientists who publish in high-impact journals but whose work goes uncited? Hirsch developed a new metric that he terms the h index, which aims to evaluate the impact of individual scientists [5]. It is the highest number h of papers a scientist has written that have each received at least h citations; an h index of 50, for example, means someone has written 50 papers that have each been cited at least 50 times (a minimal sketch of the computation follows Table 1). The index is field dependent; the top ten physicists have indexes of about 70, whereas the top ten biologists have indexes in excess of 120.

Table 1. The highest-ranking journals of 2003

| Rank | Impact factor | Journal | PageRank | Journal | Y-factor | Journal |
|------|---------------|---------|----------|---------|----------|---------|
| 1 | 52.3 | Annu Rev Immunol | 16.8 | Nature | 52 | Nature |
| 2 | 37.6 | Annu Rev Biochem | 16.4 | J Biol Chem | 48.8 | Science |
| 3 | 36.8 | Physiol Rev | 16.4 | Science | 19.8 | N Engl J Med |
| 4 | 35.0 | Nat Rev Mol Cell Biol | 14.5 | Proc Natl Acad Sci USA | 15.3 | Cell |
| 5 | 34.8 | N Engl J Med | 8.4 | Phys Rev Lett | 14.9 | Proc Natl Acad Sci USA |
| 6 | 31 | Nature | 5.8 | Cell | 10.6 | J Biol Chem |
| 7 | 30.6 | Nat Med | 5.7 | N Engl J Med | 8.5 | JAMA |
| 8 | 29.8 | Science | 4.7 | J Am Chem Soc | 7.8 | Lancet |
| 9 | 28.2 | Nat Immunol | 4.5 | J Immunol | 7.6 | Nat Genet |
| 10 | 28.2 | Rev Mod Phys | 4.3 | Appl Phys Lett | 6.5 | Nat Med |

PageRank values are ×10³, and Y-factor values are ×10².
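Hirsch's definition translates directly into code. This is a minimal sketch; the function name and citation counts are invented for illustration.

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Papers cited 10, 8, 5, 4, and 3 times: four papers have at least
# 4 citations each, but there are not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

A useful property of this metric is that a single blockbuster paper cannot inflate it: a scientist with one paper cited 10,000 times and nothing else still has an h index of 1.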
In a very interesting recent paper, Bollen et al. [6] questioned the meaning of impact factors, arguing that they actually represent popularity rather than prestige. The authors began with the intuitive idea that the status of an actor is determined not only by the total number of endorsements he or she gets from peers but also by the number of endorsements from prestigious actors. Who cares, you might ask, if one was endorsed by thousands of nonentities, when the endorsement of a few Nobel laureates is what we should be after? But how does one actually quantify prestige? Here Bollen et al. [6] used something we have all become familiar with but whose origin we likely do not know: Google's PageRank.

Page and Brin, the inventors of Google, developed the algorithm to rank web pages. It assigns a numerical value to each element in a hyperlinked network of documents in order to establish its importance within the set of pages; this is the basis of the ranking you see when you type something into the Google search engine and receive a list ordered by importance. Google describes it as follows: “PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves ‘important’ weigh more heavily and help to make other pages ‘important’” (http://www.google.com/technology/). In other words, it is not enough that your work is cited by many people; for you to have real impact, the people who cite your work must be heavily cited themselves. If a web page (science paper) is linked to (cited) by others, each page is assumed to transfer a proportion of its prestige to the pages it links to: if a page has ten outlinks, each recipient page acquires one-tenth of the value of that page. This is the meaning of the “democratic” nature of the web.

However, Bollen et al. [6] introduced a new parameter that takes citation frequency into account: if journal B cites journal A ten times more often than it cites any other journal, then B should transfer ten times more of its prestige to A. This weighted analysis had in fact already been applied to Google's PageRank by other scientists to allow better annotation of pages on the World Wide Web. But Bollen et al. [6] also noted that the impact factor, which clearly measures popularity, is not without value, since it is your peers who are citing your papers. Hence, they invented a new parameter, the Y-factor, the product of the PageRank and the impact factor. Using these weighting methods, they reanalyzed journal status for 2003 and obtained the ranking of the top journals shown in Table 1. Now the ranking really makes sense; it is what we all think of when we think of prestigious journals.
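The mechanics can be made concrete with a short sketch. This shows the textbook, unweighted form of PageRank computed by power iteration on a toy citation graph, followed by the Y-factor multiplication; the journal names, damping factor, and impact factors are invented for illustration, and Bollen et al.'s weighted variant would additionally split each journal's outgoing prestige in proportion to citation counts rather than evenly.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links[j] lists the journals that journal j cites. Each journal
    splits its current rank evenly across its outgoing citations, so a
    journal with ten outlinks passes one-tenth of its value to each."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for source, cited in links.items():
            for journal in cited:
                new_rank[journal] += damping * rank[source] / len(cited)
        rank = new_rank
    return rank

# Toy citation graph and invented impact factors.
links = {"J-A": ["J-B", "J-C"], "J-B": ["J-C"], "J-C": ["J-A"]}
impact = {"J-A": 30.0, "J-B": 10.0, "J-C": 5.0}

pr = pagerank(links)
y_factor = {j: pr[j] * impact[j] for j in pr}  # popularity x prestige
print(sorted(y_factor, key=y_factor.get, reverse=True))
```

The multiplication rewards journals that are both heavily cited (popular) and cited by heavily cited journals (prestigious), which is why Nature and Science rise to the top of the Y-factor column in Table 1.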
Surprisingly, the same analysis applied to journals in the field of medicine (Table 2) did not produce an equally large difference between the rankings.

Table 2. The highest-ranking journals in medicine in 2003

| Rank | Impact factor | Journal | PageRank | Journal | Y-factor | Journal |
|------|---------------|---------|----------|---------|----------|---------|
| 1 | 34.8 | N Engl J Med | 5.7 | N Engl J Med | 19.8 | N Engl J Med |
| 2 | 30.6 | Nat Med | 4.2 | Lancet | 8.5 | JAMA |
| 3 | 21.5 | JAMA | 4.0 | JAMA | 7.8 | Lancet |
| 4 | 18.3 | Lancet | 2.3 | J Clin Invest | 6.5 | Nat Med |
| 5 | 15.3 | J Exp Med | 2.2 | J Exp Med | 3.4 | J Exp Med |
| 6 | 14.3 | J Clin Invest | 1.4 | Am J Respir Crit Care Med | 3.2 | J Clin Invest |
| 7 | 12.4 | Ann Intern Med | 1.2 | Ann Intern Med | 1.4 | Ann Intern Med |
| 8 | 11.4 | Annu Rev Med | 0.9 | Neuroimage | 1.2 | Am J Respir Crit Care Med |
| 9 | 8.9 | Am J Respir Crit Care Med | 0.9 | Arch Intern Med | 0.6 | Arch Intern Med |
| 10 | 6.8 | Arch Intern Med | | | 0.6 | Neuroimage |

PageRank values are ×10³, and Y-factor values are ×10².

These studies raise an important issue. We have to start from the gold standard of evaluation, which begins and ends with knowledgeable readers who decide on the importance of a paper after reading it. Decisions about its validity will clearly await confirmation of its results, and presumably that is when the citations begin. The use of these factors by bureaucracies may be excusable, since they themselves do not have the expertise to make informed judgments; in that capacity the factors serve a very useful role. But I think they should be adjuncts to, not substitutes for, peer evaluation. Promotions at all the institutions I have worked at have depended largely on the words of a large body of external scientists working in the candidate's field of expertise. I am sure this will remain the norm, despite the fear of the Old Boys' network.

Reference(s)

1. Garfield E. Citation indexes for science: a new dimension in documentation through association of ideas. Science 1955; 122: 108–111.
2. Selye H. The general adaptation syndrome. J Clin Endocrinol 1946; 6: 117–230.
3. Garner R. A Computer-Oriented Graph Theoretic Analysis of Citation Index Structures. Philadelphia: Drexel University Press, 1967. http://www.garfield.library.upenn.edu/rgarner.pdf
4. Lowry OH, Rosebrough NJ, Farr AL, Randall RJ. Protein measurement with the Folin phenol reagent. J Biol Chem 1951; 193: 265–275.
5. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 2005; 102: 16569–16572.
6. Bollen J, Rodriguez MA, Van de Sompel H. Journal status. Scientometrics 69. Available at: http://arxiv.org/abs/cs.DL/0601030