Editorial · Peer reviewed

What's in a number?

2011; American Physiological Society; Volume 111, Issue 4; Language: English

10.1152/japplphysiol.00935.2011

ISSN

8750-7587

Author

Peter D. Wagner

Topic(s)

Academic Publishing and Open Access

Abstract

Peter D. Wagner, MD, Department of Medicine, University of California, San Diego, La Jolla, California

Published online: 01 Oct 2011. https://doi.org/10.1152/japplphysiol.00935.2011

The scientific publishing world is being influenced by the Impact Factor (IF) just as the wine industry has been, and continues to be, influenced by a certain Robert Parker. There is little doubt that, just as Parker's personal taste in wine has caused winemakers in droves to change their procedures and wine styles, the IF has driven at least some journals to considerably alter their publication practices to raise IF. The tail is wagging the dog big time, and this simply has to stop.

Academic institutions and funding agencies must be made to see that IF is NOT the only parameter of journal influence. It is not even a good one (see below). These decision-making bodies are significantly affecting authors' lives based on a single, poor parameter. They demand good science of their employees/fundees; should they not be held to similar standards in their assessment of the journals in which their employees/fundees publish? At the very least, they should take a balanced look at other available citation statistics.

But first, why is IF poor? Dempsey et al. (1) pointed out some time ago that IF reflects a journal's, not an author's, influence and, moreover, that only a minority of articles in a journal significantly contribute to the overall IF (3). There is great intra-journal variance in how often its papers are cited, making an average number such as IF truly flawed (2). There is absolutely no justification for using journal IF as a parameter of individual author influence.

But what about IF as a barometer of a journal's influence?
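Before weighing the alternatives, it helps to be concrete about what these citation parameters actually measure. Below is a minimal sketch, with entirely invented numbers (not taken from the JCR), that loosely follows the standard definitions of three of the parameters discussed in this editorial: the 2-year IF, total citations, and citation half-life.

```python
# Toy illustration of three citation parameters for one hypothetical journal.
# All numbers are invented for illustration only.

# Articles published per year (the items counted by the 2-year IF).
articles_published = {2008: 200, 2009: 210}

# Citations received during 2010, broken down by the cited article's
# publication year.
citations_in_2010 = {
    2010: 150, 2009: 700, 2008: 640, 2007: 500,
    2006: 420, 2005: 300, 2004: 180, 2003: 90,
}

# 2-year Impact Factor: citations in 2010 to 2008-2009 articles, divided by
# the number of articles published in 2008-2009.
impact_factor = (citations_in_2010[2008] + citations_in_2010[2009]) / \
                (articles_published[2008] + articles_published[2009])

# Total citations: citations in 2010 to articles of ANY age.
total_citations = sum(citations_in_2010.values())

# Citation half-life (roughly, per the JCR definition): the number of years,
# counting back from the citing year, that account for half of all citations.
def citation_half_life(citing_year, cites_by_pub_year):
    total = sum(cites_by_pub_year.values())
    running = 0
    for pub_year in sorted(cites_by_pub_year, reverse=True):
        running += cites_by_pub_year[pub_year]
        if running >= total / 2:
            return citing_year - pub_year + 1  # inclusive span in years
    return float("inf")

half_life = citation_half_life(2010, citations_in_2010)

print(f"2-yr IF: {impact_factor:.2f}")        # high if recent papers are cited quickly
print(f"Total citations: {total_citations}")  # rewards a large, long-lived archive
print(f"Half-life: {half_life} yr")           # long if older papers keep being cited
```

Note how the three numbers respond to different things: a review journal with a short burst of early citations can post a high IF yet modest total citations, while a large research journal with a deep, durably cited archive shows the reverse, which is exactly the divergence the figures below illustrate.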
As mentioned, the IF is but one citation parameter. It reflects the average number of citations per article in the journal within specified time windows. The ISI compiles several other parameters of journal influence, all grounded in the number of citations to each journal (begging the deeper question, for another day, of whether citation statistics of any kind should be the sole barometer of scientific publications). It is well worth looking at these alternative statistics because we do not often get to hear about them, thanks to the domination of the IF. These alternative statistics tell a very different story.

The 2010 Journal Citation Report is out (http://thomsonreuters.com/products_services/science/science_products/a-z/journal_citation_reports/) (4), and we at APS are of course interested in our journals and those of our competitors. So we looked at the available citation statistics of some 77 journals listed by ISI as "Physiology Journals." Figure 1 plots IF in a color-coded manner. The four journals with the highest IF (dark blue) are all journals that contain only reviews and no primary research. Two of these are APS journals. The next three journals (light blue) are specialty niche journals. Those in green are the APS research journals, except for Journal of Applied Physiology, which is in red (this is a Journal of Applied Physiology editorial). All the others are in gray, with three standouts identified by name. By this criterion, APS research journals span the 12th to 36th percentiles. The 5-year impact factor plot is almost identical to the 2-year plot in Fig. 1 (r2 = 0.95) and is thus not shown.

Fig. 1. 2010 Impact Factor for each of the 77 journals identified in Journal Citation Report 2010 (4) as physiological, arranged by rank order and color coded as indicated.

Now look at Fig. 2. This takes the very same journals but instead of IF uses another citation parameter: total number of citations in the same reporting period, 2010.
The color coding is the same as for Fig. 1. Hats off to the Journal of Physiology, sitting at the top of the class, but the important observation is where the four ultra-high-IF review-only journals have migrated. Compared with the leaders in this portrayal of impact, only Physiological Reviews has a strong showing. The other three review journals that shine in Fig. 1 are far from the top. Journal of Physiology and three APS research journals are way out in front, and the remaining APS research journals are also highly ranked. Think about what Fig. 2 is saying about the primary research journals: in their entirety, their overall impact is huge because they publish a lot of cited work. In absolute terms, there are substantially more citations to papers in any one of the top four research journals than to papers in Physiological Reviews. So IF gives one perspective; total citations a completely different outcome. Which is the better barometer of journal impact? You be the judge.

Fig. 2. Rank-ordered 2010 total number of citations to articles of any age in each of the same 77 journals, color coded as for Fig. 1.

Figure 3 displays yet another citation parameter, the citation half-life, and the rankings switch around again. The highest value ISI gives for this parameter is ">10," which explains why the top 12 entries appear identical. Here all color-coded clusters (research, niche, and review) are spread-eagled across the range, and no clear picture emerges for any of these groups. Three parameters, three different outcomes.

Fig. 3. Rank-ordered 2010 citation half-life for articles in each of the same 77 journals, color coded as for Fig. 1.

Figure 4 shows the Eigenfactor score, which at first looks somewhat like Fig. 2 (total citations), especially for the research journals. However, closer inspection reveals that the review-only journals now fare much better than by total citations.
Four for four (i.e., four different parameters, four different outcomes)!

Fig. 4. Rank-ordered 2010 Eigenfactor score for articles in each of the same 77 journals, color coded as for Fig. 1.

Figure 5, the article influence score, comes back closer to the standard IF rankings, except for the three light blue niche journals that did well with IF but mostly not so well by the article influence score. Five for five.

Fig. 5. Rank-ordered 2010 article influence score for articles in each of the same 77 journals, color coded as for Fig. 1.

Figure 6 shows the remaining parameter, the immediacy index, which distributes the rankings in yet another manner. Dare one claim six for six?

Fig. 6. Rank-ordered 2010 immediacy index for articles in each of the same 77 journals, color coded as for Fig. 1.

What is the message from all of this? Hopefully, it is obvious. It is high time to put IF in its place as just one of many citation-based parameters of journal influence, each of which tells a different story. Decision-making bodies must accept, and act on, this irrefutable fact. They need to ask whether any citation parameter, or even any group of parameters, should be the final arbiter of journal excellence. If they decide the answer is yes, good science demands that they weigh and respect the many different ways citations can be quantified. In this author's opinion, each parameter adds value, and in the end, if citations must rule, the several ways of depicting impact need to be integrated to quantify a journal's influence. But it is high time for the decision-making bodies dictating faculty advancement and research funding to give up on IF as the sole yardstick of impact.

DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the author.

REFERENCES

1. Dempsey J. Impact factor and its role in academic promotion: a statement adopted by the International Respiratory Journal Editors Roundtable. J Appl Physiol 107: 1005, 2009.

2. Frank M. Impact factors: arbiter of excellence? J Med Libr Assoc 91: 4-6, 2003.

3. Seglen PO. Why the impact factor of journals should not be used for evaluating research. Br Med J 314: 498-502, 1997.

4. Thomson Reuters. Journal Citation Report 2010 (Online). Thomson Reuters, Philadelphia, PA, 2010. http://thomsonreuters.com/products_services/science/science_products/a-z/journal_citation_reports/ (August 3, 2011).

AUTHOR NOTES

Address for reprint requests and other correspondence: P. D. Wagner, Dept. of Medicine, Univ. of California, San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (e-mail: [email protected]edu).

J Appl Physiol 111: 951-953, October 2011. Copyright © 2011 the American Physiological Society. PubMed: 21799127.
