Rise and Fall of the Thomson Impact Factor
2008; Lippincott Williams & Wilkins; Volume 19, Issue 3; Language: English
DOI: 10.1097/ede.0b013e31816a1293
ISSN: 1531-5487
Topic(s): scientometrics and bibliometrics research
How do we judge a journal's success? The publisher's criterion is simple enough—a journal has to make money. But editors, authors, and readers have a more elusive goal. We want our journals to publish interesting and important papers that advance the field. How can we tell if a journal is succeeding?

The impact factor seemed at first to be a step in the right direction. Here was a measure of the extent to which a journal's papers contribute enough to be mentioned by others. This measure had a simple basis (we thought): the average number of times a journal's papers are cited over a period of time. This had some intuitive appeal.

There were obvious limitations even at the outset: mere citation doesn't mean that a paper is important—or even good. More subtle problems gradually emerged. The impact factor is subject to manipulation, to the extent of distorting the editorial process. An editor who holds 2 equally good epidemiology papers, say on breast cancer and on liver disease, could be swayed by knowing there are hundreds of breast cancer epidemiologists out there ready to cite a breast cancer paper, but only a few who care about liver disease. This is hardly fair to authors who pioneer a new area. As with so many other things in life, the advantage seems to go to the strong.

Such limitations of the impact factor are no secret. They have been widely discussed1,2 and the system remains widely tolerated nonetheless. But lately, events have taken an unexpected turn. What started as an index for evaluating a journal has now morphed into an index for evaluating the papers that are published in the journal—and even for evaluating the authors who write the papers that are published in the journal. It has become widespread practice for academic institutions to base monetary awards on the Thomson impact factor of the journals in which their researchers publish. Apparently the thinking is, “even if your paper is useless, publish it in a journal with a good impact factor and we will forgive you.” Some examples:

In Germany, universities distribute money to researchers by a formula that includes the Thomson impact factor. Each point of impact factor is worth about 1000 Euros (Stephan Mertens, personal communication).

In Pakistan, researchers receive bonuses of up to US$20,000 a year depending on the sum of the impact factors of the journals in which they publish.3 Half is for researchers' personal use.3

In Finland, a portion of hospital funding from the government depends on the impact factor of the journals in which the hospital researchers publish. An increase of one point in impact factor for one paper can increase a hospital's funding by US$7000.4

As these uses (and abuses) of the Thomson impact factor spread, we now find out that the impact factor doesn't even mean what we thought it did. The commentary5 by Miguel Hernán in this issue of Epidemiology demonstrates the degree to which the impact factor is biased by an arbitrary rule of bookkeeping—a bias that in just one small sample of epidemiology journals changes the impact factor by up to 30%. Thomson Scientific (the owner, calculator, and aggressive marketer of the impact factor) is unapologetic about such problems. The company says that if we have been misinterpreting the impact factor, then we just haven't been paying attention.6 Maybe so. Another concern is that Thomson's methods for calculating the impact factor are neither transparent nor reproducible.5,7
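For readers who want the arithmetic behind the number, here is a rough sketch of the commonly described two-year calculation; Thomson's exact procedure is not public, so this should be read as an approximation rather than its definition:

\[
\mathrm{IF}_{Y} \;\approx\; \frac{\text{citations received in year } Y \text{ to everything the journal published in years } Y\!-\!1 \text{ and } Y\!-\!2}{\text{number of ``citable items'' the journal published in years } Y\!-\!1 \text{ and } Y\!-\!2}
\]

The bookkeeping issue raised above lives largely in that denominator: the numerator counts citations to everything a journal publishes, including editorials and letters, while the denominator counts only the items privately classified as “citable” (typically articles and reviews), so the classification itself can move a journal's impact factor substantially.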
Where does all this leave us? Our institutions are evaluating our scientific work with a single indicator of obscure construction, subject to manipulation, and meaning something different than we thought. We have a problem.

It should go without saying (but apparently needs to be said) that no single number can capture the value of scientific work. At the very least, we need lots of numbers. In an age of hyperabundant data, this should not be difficult—and in fact, it's not. There are many facets of journals (and papers, and authors) that can be quantified. Do you want to know how many times one of your scientific papers has been cited? “Google Scholar”8 will tell you in a fraction of a second (and for free). Or perhaps you're curious about how journals compare in measures of productivity and prestige? “SCImago”9 is an ambitious attempt to quantify these aspects—again for free and with structural advantages over the Thomson impact factor.

To an extent that no one could have anticipated, the academic world has come to place enormous weight on a single measure that is calculated privately by a corporation with no accountability, a measure that was never meant to carry such a load. Yes, some of us benefit from this flawed system—in addition to other rewards that come from publishing in high-impact journals, we collect nice cash bonuses. But none of this changes the fact that evaluating research by a single number is embarrassing reductionism, as if we were talking about figure skating rather than science. Our university and hospital administrators and our granting agencies apparently haven't gotten this message. As Hernán points out, there's no one better qualified to tell them than us.