A Review of the Journal of Investigative Dermatology's Most Cited Publications over the Past 25 Years and the Use of Developing Bibliometric Methodologies to Assess Journal Quality
2012; Elsevier BV; Volume: 132; Issue: 3; Language: English
10.1038/jid.2011.391
ISSN 1523-1747
Authors: David R. Bickers, Robert L. Modlin
Topic(s): Conferences and Exhibitions Management
Abstract: The JID is a major resource for publishing dermatologic research. Here we document bibliometric systems that permit detailed analysis of JID's relative scientific quality. We provide an overview of metrics employed by the ISI Thomson Reuters Web of Knowledge and Elsevier's open-access Scopus to measure JID's comparative performance. We list JID's 50 most cited articles between 1986 and 2010 and summarize the six most cited papers published during this period. We conclude by showing how selected cited papers have influenced research in the JID subcategories of immunology/infection and photobiology during this period. JID has thrived as the strength of its editorial leadership and the quality of dermatologic science have grown apace.

Anyone old enough to recall the weekly arrival of Current Contents and the painfully slow process of combing through each issue searching for relevant articles of interest, manually addressing postcards requesting reprints, or trudging to the library to copy a paper of interest will remember the name Eugene Garfield. He was responsible for Current Contents, a true pioneer in addressing the explosion of scientific information before we all had laptops and desktops. The introduction of PubMed in 1996 and Google Scholar in 2004 provided rapid online access to the literature. Dr. Garfield was also the founder of the Institute for Scientific Information (ISI). In 2004, the ISI was acquired by the science division of the Thomson Reuters Company. In this review we attempt to provide a glimpse of the newly developing bibliometric tools that have become available for assessing journal quality and to compare the position of JID among other leading dermatology journals using some of these tools. We then offer perspectives on JID's growth as an influential source of knowledge in the field of cutaneous biology over the past 25 years.

The ISI Thomson Reuters Web of Knowledge provides quick, powerful access to the world's leading citation databases. It covers more than 10,000 of the highest-impact journals worldwide. In addition to Current Contents, Garfield created numerous innovative bibliographic products. Together with Irwin Sher, he first proposed the concept of the impact factor by re-sorting the author citation index into the Journal Citation Index and, with the support of the National Institutes of Health, was thereafter able to create the Science Citation Index (SCI) (Garfield, 2006). This led to the recognition of a core group of highly cited journals that would form the basis of the SCI.
Using the Web of Knowledge, we identified the 50 most cited articles published in JID between 1986 and 2010, listed the number of times each was cited during that time, and assigned each article to one of the subcategories utilized by JID since 2002. These data are shown in Table 1 (see also Supplementary Table S1 online).

Table 1. The 50 most cited JID articles in the Institute for Scientific Information Thomson Reuters Web of Knowledge over the past 25 years and their subcategories.

The impact factor is defined as the ratio of the number of citations in the current year (numerator) to all articles and reviews published in the previous 2 years (denominator). Example of the calculation of the 2010 JID impact factor: total citations in 2010 to articles published in 2008 (1,705) and 2009 (1,844) = 3,549; number of articles published in 2008–2009 = 566; JID impact factor = 3,549/566 = 6.270.

There has been gradual improvement in JID's impact factor over the years, and between 2006 and 2010 it rose steadily from 4.535 to 6.270 (Figure 1). Its impact factor places the Journal first in a list of the top 20 dermatology journals ranked by the Web of Knowledge (Supplementary Table S2 online).

The use of the 2-year window to calculate the impact factor has been criticized as too short, because it cannot reflect changes in citation patterns that occur over a longer time span. This has led to the use of the 5-year impact factor, calculated identically to the original 2-year impact factor but over 5 years. The 5-year journal impact factor is defined as the ratio of the number of citations in the current year (numerator) to all articles and reviews published in the previous 5 years (denominator). Example of the calculation of the 2010 JID 5-year impact factor: total citations in 2010 to articles published between 2005 and 2009 = 8,435; number of articles published between 2005 and 2009 = 1,465; JID 5-year impact factor = 8,435/1,465 = 5.758.

The utility of the impact factor as a tool for assessing the quality of scientific journals has been questioned. For example, it has been pointed out that the SCI database includes only normal articles, notes, and reviews in the denominator as citable items but records citations to all types of documents (including editorials, letters, and meeting abstracts) in the numerator (Favaloro, 2008; Elsaie and Kammer, 2009). As a result, journals that include meeting reports, editorials, and extensive correspondence sections could inflate their impact factor relative to those that do not. Review articles may also help to increase the impact factor because they attract more citations. Despite these limitations, it is generally agreed that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals (most of which predated the concept of the impact factor) that tend to have higher impact factors (Hoeffel, 1998).
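To make the arithmetic above concrete, the following minimal Python sketch reproduces the 2-year and 5-year impact factor calculations from the JID figures quoted in the text; the function and variable names are illustrative only and are not part of any Web of Knowledge interface.

```python
def impact_factor(citations_to_window: int, items_published_in_window: int) -> float:
    """Impact-factor ratio: citations received in the census year to items
    published in the preceding window, divided by the number of items
    published in that window."""
    return citations_to_window / items_published_in_window

# 2010 JID 2-year impact factor (figures quoted in the text):
# citations in 2010 to 2008 articles (1,705) + 2009 articles (1,844) = 3,549
# articles published in 2008-2009 = 566
jid_if_2yr = impact_factor(1705 + 1844, 566)   # ~6.270

# 2010 JID 5-year impact factor:
# citations in 2010 to articles published 2005-2009 = 8,435; articles = 1,465
jid_if_5yr = impact_factor(8435, 1465)         # ~5.758

print(f"2-year IF: {jid_if_2yr:.3f}, 5-year IF: {jid_if_5yr:.3f}")
```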
Wolthoff et al. (2011) have attempted to address some of the limitations of impact factor rankings as they pertain to dermatology journals by proposing the comprehensive citation factor (CCF) (Supplementary Table S3 online). The CCF is based on data obtained in 2007 and includes in the denominator all citable items, specifically editorials and letters. Their intent is to discourage the high proportion of editorials and letters to the editor that can artificially inflate a journal's impact factor. They also address another potential shortcoming of the impact factor, namely, that the classification of journal articles by the Web of Knowledge is performed manually by multiple individuals, raising questions about the accuracy and consistency of these designations. Rossner et al. (2008) have likewise expressed concerns regarding the arbitrary manner in which the Web of Knowledge computes impact factors and interprets its databases.

Another problem with the concept of the impact factor is its use to evaluate the scholarly credentials of scientists rather than journals (Fersht, 2009). Fersht suggests that this is an inappropriate use of the impact factor and that assessment of academic merit requires careful and meticulous analysis by expert scholars in the subject area; the use of a simple metric for this purpose should never be a substitute for the evaluation of research quality. Despite these reservations, the impact factor remains an objective measure of quality for the best journals in a specialty.

The immediacy index of a journal is calculated by dividing the number of citations to articles published in a given year by the number of articles published in that year. It is an indicator of the speed with which citations to a specific journal appear in the published literature. Example of the calculation of the 2010 JID immediacy index: citations to items published in 2010 = 412; number of items published in 2010 = 250; JID immediacy index = 412/250 = 1.648. Because it is a per-article average, the immediacy index tends to discount the advantage of large journals over small ones. However, frequently issued journals may have an advantage because an article published early in the year has a better chance of being cited than one published later in the year. Many publications that publish infrequently or late in the year have low immediacy indexes. For comparing journals specializing in cutting-edge research, however, the immediacy index can provide a useful perspective.

The cited half-life for a journal is the median age in years of its items cited in the current year. It is defined by the number of publication years, counting back from the current year, that account for 50% of the citations received by the journal; half of the total citations to the journal are to items published within the cited half-life. JID cited half-life = 7.9 years. The citing half-life for a journal is the median age of the items the journal cited in the current year; half of the citations in the journal are to items published within the citing half-life. JID citing half-life = 6.6 years.
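A similar sketch, using the JID immediacy figures quoted above, shows the immediacy index and a simple way to derive a cited half-life from a per-year citation breakdown. The year-by-year counts below are invented purely for illustration, and the half-life helper returns a whole-year approximation, whereas the JCR interpolates to fractional values such as 7.9.

```python
def immediacy_index(citations_to_current_year_items: int, items_published_current_year: int) -> float:
    # Citations in the census year to items published that same year,
    # divided by the number of items published in that year.
    return citations_to_current_year_items / items_published_current_year

jid_immediacy_2010 = immediacy_index(412, 250)   # ~1.648

def cited_half_life(citations_by_age):
    """Approximate median age (in whole years) of the items cited in the
    current year. `citations_by_age[k]` is the number of citations to items
    published k years ago; the half-life is the age at which the running
    total first reaches 50% of all citations."""
    total = sum(citations_by_age)
    running = 0
    for age, count in enumerate(citations_by_age):
        running += count
        if running >= total / 2:
            return age
    return len(citations_by_age)

# Invented year-by-year counts, purely to illustrate the 50% cut-off:
example_counts = [120, 300, 340, 310, 280, 240, 200, 180, 150, 130]
print(jid_immediacy_2010, cited_half_life(example_counts))
```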
Further efforts have been made to find additional metrics for measuring the quality of scientific journals (Rousseau, 2009). In that report, alternatives to the impact factor were compared to ascertain their value, including the Eigenfactor score and the Article Influence score. It was shown that although these indicators are calculated using different methods and databases, they strongly correlate with the Web of Knowledge impact factor and with one another.

The Eigenfactor score of a journal is an estimate of the percentage of the time that researchers actually spend with that particular journal. The Eigenfactor algorithm corresponds to a simple model of research in which readers follow chains of citations as they move from journal to journal. Imagine a researcher in a library selecting a journal article at random. After reading the article, the researcher randomly selects a citation from the article and proceeds to the cited journal, reads a random article there, and selects a citation leading to yet another journal. This process is then repeated over and over. The Eigenfactor score is the sum of normalized citations received from other journals, weighted by the status of the citing journals; citations are normalized with respect to the total number of cited references in the citing journal. The citation target period is 5 years. JID Eigenfactor score = 0.05137.

The Article Influence score is a measure of the average influence of each of a journal's articles over the first 5 years after publication. Article Influence scores are normalized so that the mean article in the entire ISI Thomson Journal Citation Reports (JCR) database has an Article Influence of 1.00. Thus, in 2010 JID had an Article Influence score of 1.800, meaning that the average JID article has 1.8 times the influence of the mean article in the JCR. The data in Supplementary Table S2 online show how the 20 most highly cited dermatology journals in the Web of Knowledge compare in terms of these various metrics. Given that JID has a reputation for publishing research articles and reviews focused on basic research and, increasingly, on translational application of that research, it is perhaps not surprising that it is ranked highest in virtually all of these bibliometric categories.

Franceschet (2010) compared the 2-year impact factor, 5-year impact factor, Eigenfactor score, and Article Influence score as measures of journal quality. Article Influence and the 2-year impact factor were close to the 5-year impact factor as tools in this regard, and Article Influence was shown to be the most stable indicator across different scientific disciplines. Rizkallah and Sin (2010) also used a combined approach to assess journal quality by comparing impact factor, Eigenfactor, and Article Influence scores in a series of highly cited journals between 2001 and 2008. Their analysis of impact factor and Eigenfactor score yielded a similar rank order of medical journals, although some discrepancies were apparent. For example, journals that publish large numbers of papers have higher Eigenfactor scores than would be expected from their impact factors, whereas the reverse is true for journals that publish fewer papers.
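The random-walk description of the Eigenfactor above is, at its core, a power iteration over a journal-to-journal citation matrix. The sketch below illustrates only that core idea; the published Eigenfactor algorithm additionally uses a 5-year citation window, excludes journal self-citations, and adds a teleportation term weighted by article counts, none of which are modeled here. The three-journal matrix and article counts are invented for the example.

```python
import numpy as np

def eigenfactor_like_scores(cites, n_iter=200):
    """cites[i][j] = citations from journal j to journal i.
    Column-normalize so each citing journal's references sum to 1, then
    iterate the 'random reader' walk until the journal visit frequencies
    stabilize; the result approximates the fraction of time spent per journal."""
    C = np.asarray(cites, dtype=float)
    col_sums = C.sum(axis=0)
    P = C / np.where(col_sums == 0, 1, col_sums)   # normalized citation matrix
    v = np.full(P.shape[0], 1.0 / P.shape[0])      # start reading anywhere
    for _ in range(n_iter):
        v = P @ v
        v = v / v.sum()
    return v

# Invented 3-journal example (rows/columns: journals A, B, C)
cites = [[0, 40, 10],
         [30, 0, 20],
         [5, 15, 0]]
scores = eigenfactor_like_scores(cites)

# Crude per-article analogue of the Article Influence idea:
article_counts = np.array([100, 250, 60])
article_influence_like = scores / article_counts
article_influence_like /= article_influence_like.mean()   # scale so the mean is ~1.0
print(scores, article_influence_like)
```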
The h-index was first proposed by Jorge Hirsch, a physicist at the University of California, San Diego (Hirsch, 2005). It is defined as the largest number h of a scientist's published papers that have each received at least h citations. For example, someone with an h-index of 50 has written 50 papers, each of which has been cited at least 50 times. Hirsch believes that this is more objective than measures based on numbers of publications alone because a large number of mediocre publications would create a false impression of superior scholarship. Since its introduction, the h-index has become a widely accepted indicator of scientific performance and is included in major bibliographic databases, including the Web of Knowledge. It is said to have several advantages, including simplicity and the fact that citation impact and publication numbers are combined in a single number (Bornmann et al., 2011). Loscalzo (2011), however, questions the utility of the h-index and emphasizes that it suffers from the same limitations associated with citation indexes and is not a surrogate for scientific quality. He adds that it seems unlikely that any substitute for the impact factor will be found in the near future because it has become such an embedded measure, both academically and commercially.

We initially attempted to provide h-index data for the authors of the top 50 cited papers in JID between 1986 and 2010 (Table 1). However, this analysis was complicated by a number of confounders, including duplicate names and initials, which in our opinion made confirmation of these scores uncertain, and we have therefore not included them here. The assignment and use of methods to more precisely identify authors should help to enhance the accuracy of author h-indexes in the future.
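As a concrete illustration of Hirsch's definition (not code from any of the cited databases), a minimal h-index computation over a list of per-paper citation counts looks like this; the example author and counts are hypothetical.

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with 6 papers:
print(h_index([50, 18, 12, 7, 3, 1]))  # -> 4 (four papers with >= 4 citations each)
```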
Elsevier's Scopus, also known as SciVerse Scopus, is another citation database containing both peer-reviewed research literature and Web sources. This is an open-access portal that also attempts to address both the quantity and the quality of scientific publications. It provides comprehensive coverage of the scientific, technical, medical, and social sciences fields as well as, more recently, the arts and humanities. SCImago is a related portal that includes journal and country scientific indicators developed from information in the Scopus database (Elsevier B.V.); these indicators can also be used to assess and analyze scientific domains. The platform takes its name from the SCImago Journal Rank (SJR) indicator, which in turn is derived from Google's PageRank system and is used to rank journals in the Scopus database. The Scopus ranking for the top 25 dermatology journals is shown in Supplementary Table S4 online.

In addition to SJR rank, Scopus has identified the number of citations per article over the prior 2 years as a meaningful indicator of journal quality. Using this indicator, JID, with a total of 6.24 citations per article, is the top-ranking dermatology journal. The Scopus and Web of Knowledge rankings are quite similar (Supplementary Table S2). The Scopus ranking of selected dermatology journals relative to more than 18,750 other covered scientific journals is shown in Supplementary Table S5; JID ranks 309th of the more than 18,750 journals currently in the Scopus database.

The vast majority of rankings of journals and rankings of scientists have been developed independently. Bouyssou and Marchant (2010) argue that a consistent approach to both rankings would be preferable because there is striking interdependence between the quality of a journal and the quality of the work done by the scientists who publish in it. They used the impact factor to assess journal quality and combined this with two rankings for scientists, using either the total number of citations or the total number of citations weighted by the inverse of the number of coauthors. They concluded that these metrics provide a consistent assessment of journal and scientist quality.

Evans (2008) addressed the relationship between the rapid development of online access to journal articles and citation behavior. He found that articles published more recently listed fewer and more recent citations, and he expressed concern that this trend may result in less comprehensive scholarly review. He emphasized that the poor indexing of titles and authors in core journals, a major weakness of print library research, had nonetheless fostered the integration of science and scholarship. This interpretation has been disputed, and some believe that online access is actually having the opposite effect and encouraging more citations.

PageRank is a link analysis algorithm named after one of the founders of Google, Larry Page. This proprietary system assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The goal, as stated by Google, is to permit more rapid searching of more sites, providing more relevant results by applying a hierarchy of importance so that users spend less time on irrelevant retrievals. Based on PageRank but distinct from it, Dellavalle et al. (2007) have developed a weighted algorithm for dermatology journals (Figure 2). This algorithm assigns greater weight to citations originating in more frequently cited journals, and a high impact factor generally corresponds with a high PageRank weight (PRw). In some ways this resembles the Eigenfactor method described above in that it reflects the time readers spend following citations from journal to journal.
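The weighting idea behind PRw can be pictured, in simplified form, as counting a journal's incoming citations with each one weighted by how often the citing journal is itself cited. The sketch below is only a schematic of that weighting scheme, not the published Dellavalle et al. algorithm, and all journal names and numbers are invented.

```python
def weighted_citation_score(incoming, journal_citedness):
    """incoming: list of (citing_journal, n_citations) pairs for one journal.
    journal_citedness: total citations received by each citing journal, used
    here as a crude proxy for its 'status'. Citations from highly cited
    journals therefore count for more than citations from rarely cited ones."""
    total_status = sum(journal_citedness.values())
    return sum(n * journal_citedness[j] / total_status for j, n in incoming)

# Invented example data:
journal_citedness = {"J Clin Invest": 90000, "JID": 30000, "Small J": 800}
incoming = [("J Clin Invest", 12), ("JID", 40), ("Small J", 25)]
print(weighted_citation_score(incoming, journal_citedness))
```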
The explosion in online access to journals has led to a decline in print subscriptions, along with a rise in electronic subscriptions. Lo and Fisher (2011) made this point in the journal Stroke. Their analysis showed that in 2010 Stroke had an 11.9% decline in combined individual and institutional print subscriptions compared with 2009, whereas electronic subscriptions increased. Electronic access to Stroke articles increased by 21.4% in 2010, and the number of articles read or downloaded on mobile devices such as cell phones and other portable electronic devices increased dramatically as well. These trends strongly suggest that online access will be the dominant gateway to scientific articles in the future. At present, JID is different in that no electronic-only subscriptions are offered and, aside from institutions (libraries) and industry, almost all JID print subscriptions originate from society memberships (either the SID or the European Society for Dermatological Research).

Google Scholar is a freely accessible, Web-based search engine that indexes the full text of scholarly literature across an array of publishing formats and disciplines. It includes most peer-reviewed online journals of European and American publishers and is similar in function to other freely available citation tools, including Scirus from Elsevier, CiteSeerX, and getCITED (Beel and Gipp, 2009). Google Scholar uses a statistical model based on author names, bibliographic data, and article content to group articles probably written by the same author. Three metrics are available: the h-index; the i10-index, which is the number of articles with at least 10 citations; and the total number of citations to articles. It is possible to enable automatic addition of newly published articles to one's profile, which instructs the Google Scholar indexing system to update the author's profile as it discovers new articles. Authors can also manually update profiles by adding missing articles, fixing bibliographic errors, and merging duplicate entries. Some have criticized the quality control of Google Scholar, and it is generally seen as a browsing tool rather than a rigorous bibliometric tool such as the Web of Knowledge or Scopus. On the other hand, Google Scholar covers journals not included in the JCR, such as the Malaysian Journal of Medicine, whose contents achieved international recognition based on the citations and impact score it received in Google Scholar (Sanni and Zainab, 2010). In July 2011, Google began the launch of Google Scholar Citations, designed to provide a simple way for authors to compute citation metrics and track them over time. This feature is described at http://scholar.google.com/intl/en/scholar/citations.html. The service is currently limited to a small number of users, but interested individuals are directed to a page where they can register to be notified when the availability of Google Scholar Citations is expanded.

In 1989 a special JID supplement was published to celebrate the 50th anniversary of the founding of the SID.
David Norris, the JID editor at that time, chose for focused discussion six highly cited papers (he designated them "citation classics") that had been published in the Journal. The most highly cited paper was one by the late Albert Kligman (1966), in which he described an in vivo testing procedure that proved to be very useful in defining the risk of contact sensitization to chemicals in human populations. As pointed out by Norris, this highly predictive and reliable assay continues to be essential for the pharmaceutical and cosmetic industries. The continuing importance of the original Harvard-based cooperative clinical trial of psoralen UVA (PUVA) photochemotherapy is aptly demonstrated by the second citation classic of 1989 (Stern and Lange, 1988), which is also one of the most highly cited papers between 1986 and 2010 (number 19; Table 1). This is also a clear example of the important clinical research that has been published in JID throughout its history (see "Photobiology" below). The third citation classic of 1989 was a paper by Stanley Cohen describing the identification of an extract from murine submaxillary glands that could stimulate epidermal keratinization; the extract later became known as epidermal growth factor (Cohen and Elliott, 1963). Cohen and Rita Levi-Montalcini shared the Nobel Prize in 1986 for this seminal work, which paved the way to our current understanding of the importance of growth factors in cutaneous biology. As the fourth citation classic, Norris (1989) cited the article by Birbeck et al. (1961) that described the characteristic cytoplasmic granules in epidermal Langerhans cells (LCs) now known as Birbeck granules. This paper, along with numerous others, presaged the growing recognition of the importance of these cells in cutaneous immunobiology. The fifth citation classic identified in 1989 was that by Karasek (1966), in which the collagen gel method for culturing human keratinocytes was described; this was one of many publications that contributed enormously to the development of epidermal cell biology. Finally, the sixth citation classic of 1989 focused on numerous review articles and their importance for JID. In particular, the paper by Beutner et al. (1968)
was selected because of its in-depth discussion of the development of immunofluorescence techniques that revolutionized the clinical management of patients with autoimmune blistering diseases. As stated by Norris, "This is clearly one of the best examples of basic research changing clinical practice." Indeed, this paper is a classic forerunner of the current recognition of the importance of translational research.

Borrowing David Norris's idea, we have selected six of the most highly cited papers published in JID over the past 25 years for brief discussion (see Table 1 for full citations for the 50 most cited articles). By far the most highly cited paper is that by Ades et al. (1992; 676 citations), which described the creation