Editorial · Open access · Peer reviewed

A rose by any other name is still a rose: Assessing journal quality

2007; Elsevier BV; Volume: 55; Issue: 4; Language: English

10.1016/j.outlook.2007.06.001

ISSN

1528-3968

Authors

Marion E. Broome

Topic(s)

Innovations in Medical Education

Abstract

If we are to learn to improve the quality of the decisions we make, we need to accept the mysterious nature of our snap judgments. —Malcolm Gladwell, 2005

Discussions of journal quality have recently become more common for several reasons. One is that publishing is a big and costly business, especially in the health sciences. Library acquisition departments in universities and health care agencies often rely on faculty nominations (or protests) when deciding which journals to retain and which to let go. Appointment, promotion, and tenure committees are required to make decisions about dossiers submitted for evaluation by faculty members. And of course, each of us as individuals must decide which personal subscriptions to continue (or not). Given the subjective nature of these evaluations, quality indicators that are numeric (ie, "objective") are appealing.

The number of journals has increased over the last decade, and their diversity makes it an important challenge to make reasoned judgments about quality. In some cases, as mentioned above, judgments of journal quality can have implications for an individual's academic career. And with the new and very serious emphasis on translation of "science" to practice,[1] these assessments are not just an academic exercise, but one in which judgments about the value of the information to practitioners in a field also become a critical criterion for quality.

Traditionally, many promotion and tenure committees have used the impact factor (IF) as their primary criterion of journal quality. I wrote about the advantages and limitations of the impact factor in a previous issue of Nursing Outlook.[2] Since that time, several others have also reinforced the need to use additional criteria when assessing journal quality.[3] These include:

• citation rates,[4]
• the acceptance rate of the journal,
• sponsorship by a professional society,[5]
• the amount and type of advertising,[5]
• relevance of papers to practitioners in the field,
• reputation and prestige of members of the editorial board,
• whether manuscripts published in the journal are refereed, and
• the number of data-based manuscripts.

Less commonly addressed criteria, but ones I think important, include (1) the degree of editorial control on the part of the editor and editorial board when a journal is the official journal of a professional society, and (2) special issues or supplements addressing specialized and usually cutting-edge topics in the field, usually authored by well-respected and knowledgeable individuals.

Some of these criteria are much easier to measure than others. Yet even parameters usually thought to be highly quantitative, and thus measurable, often are not. For instance, I am commonly e-mailed by potential authors who want to publish and asked what the acceptance rate of Nursing Outlook is.
I always have to ask them, "Which acceptance rate are you interested in?" The one that reflects manuscripts rejected by me before being sent for review? These are usually rejected for obviously poor scholarship, but sometimes they simply do not fit the editorial purpose of the journal. Should an editor count the numerous e-mail inquiries from authors whom one gently tells "no" and steers to another journal? Or does one count only those papers rejected after the first (or second) round of review by referees? There is no clear consensus about this among nursing editors[6] or other editors.

Relevance and significance to the field (or subspecialty) is another criterion difficult to measure with a summary metric, but it is a critical one. One could argue that this is easily captured in a citation analysis. Yet there is also the rare paper that captures everyone's attention (every editor's dream!) but which few will cite because they do not write about the same topic. Yet that paper will stay with them and influence their thinking for many years. And I would be willing to bet that if I asked 10 leaders in nursing today to name the top 5 manuscripts that influenced the way they "thought about" a critical issue in nursing, many of those manuscripts would have come out of the same three journals. The 5 papers on my list were sometimes in an entirely different area than my program of research, but they provided me with tremendous insights that I would not have developed otherwise. These papers stimulated me to think about how I was "thinking" about what I was doing. In one case, the paper I read reported on a phenomenon that had nothing to do with pediatric pain management, yet it changed the approach I used to conceptualize mediating variables in the model I was testing.
I also once read a paper reporting the qualitative findings of a study of individuals related to someone who was murdered, in which the investigator discussed the emotional challenges of analyzing such traumatic data. I then realized what I was actually asking of my graduate students, who analyzed videotapes of children undergoing very painful procedures, and I changed how the team approached that activity, much to the relief of the graduate students. My point here is that these papers may or may not have high citation counts, but both came from a journal that published cutting-edge papers on topics I found interesting and thought-provoking, and which many, many times made me a better scholar.

I could say the same thing about editorials and their role in judging the quality of a journal. As I began my academic career, there were two journals in the field that published editorials that were usually controversial, always thought-provoking, and, on occasion, took on many of the "sacred cows" in the field. As an editor myself, who now knows firsthand how much easier it is to write a data-based report than a useful editorial, I salute those editors who shaped my thinking about many things: Donna Diers (IMAGE) and Florence Downs (Nursing Research).

The ultimate challenge is to decide who judges which journals are viewed by the discipline as the highest ranking, and for what purpose. The only way I know to get at this criterion is by consensus opinion. In most fields, with the exception of those with very narrow boundaries, APT committees rely on the impact factor. A more reasoned approach might be to seek consensus ratings among nurses in both practice and academe within specific specialty areas. This would provide insight from nurses at all levels in research, education, and practice who find the data and information in the journals useful.
It is highly likely that the journals ranked high in the consensus ratings would also rank high on other, more "measurable" criteria. This consensus building would also be an interesting exercise, because many academics read only those journals directly related to their primary role (eg, research, practice, education). Several questions could be applied across all journals in a subfield that could generate some very interesting dialog (ever the optimist!), as well as be exceptionally useful to many, including APT committees, librarians, and practitioners. These questions could look something like the following. Does the knowledge disseminated within the journal:

• build on and extend previous information in the field, or just reiterate what is already known?
• clearly identify gaps in the area and attempt to fill those gaps?
• provide cutting-edge solutions for issues in the field?

Although somewhat simplistic, these three questions move us beyond the qualitative–quantitative "real truth" arguments over whether a paper that is data-based, a clinical case, issues-driven, or a description of a new methodology is most important or relevant. These questions also focus us on building and reporting knowledge that will solve problems in the field related to patient or student outcomes, ethical dilemmas, or the sociopolitical and methodological challenges the field is facing. They might even move us beyond the subspecialty and role silos that so often seem to limit our possibilities.
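Both of the "measurable" criteria discussed above, the impact factor and the acceptance rate, are simple ratios whose value depends entirely on what is counted in the numerator and denominator. A minimal sketch makes the point; all figures below are invented for illustration, and the impact factor follows the standard two-year definition (citations in a given year to items from the prior two years, divided by citable items published in those two years):

```python
def ratio(numerator, denominator):
    """Guarded division for editorial metrics."""
    return numerator / denominator if denominator else 0.0

# --- Impact factor (invented figures) ---
cites_to_prior_two_years = 420        # citations this year to 2005-2006 items
citable_items_prior_two_years = 180   # articles/reviews published 2005-2006
impact_factor = ratio(cites_to_prior_two_years, citable_items_prior_two_years)

# --- "The" acceptance rate (invented figures for one editorial year) ---
accepted = 60
desk_rejected = 90            # rejected by the editor before peer review
rejected_after_review = 150   # rejected after one or two referee rounds
informal_no = 40              # e-mail inquiries gently steered elsewhere

# Counting only post-review rejections flatters the journal...
post_review_only = ratio(accepted, accepted + rejected_after_review)

# ...while counting desk rejections and informal "no"s does not.
all_rejections = ratio(
    accepted,
    accepted + desk_rejected + rejected_after_review + informal_no)

print(f"impact factor:            {impact_factor:.2f}")      # 2.33
print(f"acceptance (post-review): {post_review_only:.0%}")   # 29%
print(f"acceptance (all no's):    {all_rejections:.0%}")     # 18%
```

The point of the sketch is only that each number reported as "the" metric embeds an editorial choice about the denominator, which is exactly why such figures cannot stand alone as quality criteria.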

Reference(s)

1. Sussman S, Valente T, Rohrbach L, Skara S, Pentz MA. Translation in the health sciences. Eval Health Sci. 2006;29:7-32.
2. Broome M. Ratings and rankings: Judging the evaluation of quality. Nurs Outlook. 2005;53:215-216.
3. Milman V. Impact factor and how it relates to quality of journals. Notices of the AMS. 2006;53:351-352.
4. Meron R, Garfield E. Citation indexing: Its application to science, technology and the humanities. Institute for Scientific Information. Available at: http://scientific.thomson.com/. Accessed June 18, 2007.
5. World Association of Medical Editors. WAME ethics resources. Available at: http://www.wame.org/resources/wame-ethics-resources. Accessed June 18, 2007.
6. Freda MC, Kearney MH. Ethical issues faced by nurse editors. Western J Nurs Res. 2005;27:487-499.