Year-End Musings from the Curator-in-Chief
2022; Elsevier BV; Volume: 129; Issue: 12; Language: English
10.1016/j.ophtha.2022.10.013
ISSN 1549-4713
One of the privileges of serving as editor-in-chief of Ophthalmology is the opportunity to share my thoughts on this page from time to time. As I finish my first calendar year in this post, I thought I would use this milestone to share a few rambling reflections with you regarding the state of the journal, and by proxy the state of the field and of medical literature more generally.

My father was a zoologist and held the title Curator of Mammals at the American Museum of Natural History. Perhaps from familiarity, while I was growing up I never really thought about the word’s meaning. Per Merriam-Webster, the definition of curator is “one who has the care and superintendence of something; especially, one in charge of a museum, zoo, or other place of exhibit.” While likening our literature to a zoo may be overreach, enough similarities exist that I think the term is appropriate. As editor, I’ve found my curatorial duties to far exceed true editorial work, and I spend much more time working with our editorial board to select among the many submitted articles than helping to wordsmith the accepted content.

This concept of curation of the literature deserves some discussion. Just as we, as ophthalmologists, have a social contract to use and maintain our knowledge and skills for the betterment of our patients’ lives, the editorial boards of our journals have a social contract to vet the submitted literature rigorously to ensure that the underlying science passes muster. The past several years have seen a dramatic erosion of trust in the public arena. Social media-driven propagation of misinformation and concerted campaigns of disinformation have created a crisis of mistrust that has permeated society. Although one would think rigorous science—the premise of which is that one can verify a conclusion by repeating an experiment—would be immune to this loss of confidence, unfortunately, it has not been spared. Scandals over the past few years involving data fabrication and falsification, hidden conflicts of interest, and the rise of “predatory journals” seeking to publish almost anything for a fee (see http://www.chm.bris.ac.uk/sillymolecules/birds.pdf for a humorous example from the Scientific Journal of Research and Reviews) have eroded what confidence may have rested with the medical journal establishment. Rigorous curation of the literature by fair brokers is an antidote to mistrust of science.

Where does Ophthalmology stand today with respect to its curation? By several measures, we do well. First, we are fortunate to attract many outstanding contributions. We received more than 2000 submissions this year. Because we can publish about 10 articles per issue, this puts our acceptance rate at less than 10%. Indeed, we must decline many excellent studies, most of which find good homes in the medical literature, including our growing Ophthalmology family of journals: Ophthalmology Retina, Ophthalmology Glaucoma, and Ophthalmology Science. (Of note, the first two expect impact factors in 2023 and the last one is now indexed in PubMed Central and the National Library of Medicine, both signs of success for these outstanding journals.) Curation can be carried out more rigorously when there’s a bounty to select from.
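A minimal back-of-the-envelope sketch of that acceptance-rate arithmetic, assuming a roughly monthly schedule of 12 issues per year (an assumption, not a figure stated above) and using only the round numbers quoted in this paragraph:

    # Back-of-the-envelope acceptance-rate estimate using the round figures above.
    # Assumption (not stated in the editorial): roughly 12 issues per year.
    submissions = 2000          # "more than 2000 submissions this year"
    issues_per_year = 12        # assumed monthly publication schedule
    articles_per_issue = 10     # "about 10 articles per issue"

    published_slots = issues_per_year * articles_per_issue   # ~120 articles per year
    acceptance_rate = published_slots / submissions
    print(f"~{published_slots} slots / {submissions} submissions = {acceptance_rate:.0%}")
    # Prints roughly 6%, consistent with "less than 10%".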
Second, our work carries impact. The true impact of our articles is best considered by the It’s A Wonderful Life test: over a period of time, how many more people would suffer from eye disease or go blind if a manuscript were not published? Our (flawed) proxy for this is how many times our articles are cited by others. The impact factor of a journal reflects the average number of citations an article receives among indexed journals (e.g., those that can be found on PubMed) in its first 2 years. Ophthalmology’s impact factor of 14.2 leads clinical ophthalmology journals and suggests that the work published in our journal impacts much of the field.

Third, our published works are accessible and being accessed. The printed Ophthalmology journal remains a benefit for the approximately 17,000 American Academy of Ophthalmology members in the U.S. Many of our articles, including the entire suite of articles on diversity, equity, and inclusion in the October issue, are available for free download to anyone in the world, and more than 2 million Ophthalmology articles have been downloaded over the past 5 years. Our outstanding social media editors have produced a great series of podcasts discussing our articles with the authors; these, too, have been listened to nearly 70,000 times at the time of writing. And for next year, we will start an online journal club to help our readers better understand some of our more challenging articles. Curation, impact, and dissemination will remain the cornerstones of our editorial process in 2023 and the years ahead.

Having celebrated some of our journal’s successes, perhaps I can close with a few year-end concerns about the submitted literature over the past year. Many of my generation will remember Festivus, the fictional year-end holiday celebrated by George Costanza’s family on the sitcom Seinfeld, which began with George’s father, Frank, yelling, “The tradition of Festivus begins with the airing of grievances. I got a lot of problems with you people! And now you’re gonna hear about it!”

First, bigger isn’t always better. The era of big data is clearly upon us, and the availability of huge datasets—some with billions of data elements—has allowed many previously inaccessible questions to be answered with confidence. However, it has also led some investigators to attempt to re-answer previously well-answered questions. Huge datasets can provide infinitesimal P values, but these are of limited value if the existing literature already strongly supports a conclusion or if the magnitude of effect is small.

Second, hypotheses should be explicit. Big data has also led to a lot of big data mining, in which huge datasets can be searched for correlations. This can devolve into “big data phrenology,” a 21st-century version of the 19th-century pseudoscience purporting to ascribe specific personality characteristics to the pattern of contours on the skull. At best, such work is hypothesis-generating; at worst, it errantly correlates noise in large datasets with important outcomes. A good study should always identify the question being addressed, state the hypothesis explicitly, and show how the experimental design answers the question at hand.
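To make the first two grievances concrete, here is a small simulation with entirely invented numbers (a sketch, not a reanalysis of anything published in our pages): with half a million eyes per arm, a 0.5-mmHg difference in intraocular pressure yields a vanishingly small P value, and screening a couple of hundred pure-noise “predictors” against an outcome reliably turns up a handful of nominally “significant” correlations.

    # Minimal simulation of the first two grievances; all numbers are invented.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Grievance 1: a clinically trivial effect with a huge sample.
    # True difference of 0.5 mmHg in intraocular pressure, SD 3 mmHg, n = 500,000 per group.
    n = 500_000
    group_a = rng.normal(16.0, 3.0, n)
    group_b = rng.normal(16.5, 3.0, n)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"0.5 mmHg difference, n = {n:,} per arm: P = {p_value:.1e}")  # astronomically small

    # Grievance 2: mining pure noise for correlations.
    outcome = rng.normal(size=2_000)
    noise_p = [stats.pearsonr(rng.normal(size=2_000), outcome)[1] for _ in range(200)]
    hits = sum(pv < 0.05 for pv in noise_p)
    print(f"Noise 'predictors' reaching P < 0.05: {hits} of 200")  # expect ~10 by chance alone

Neither output would tell a clinician anything worth acting on, which is the point.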
Third, many things are interesting, few are important. I have heard this quote attributed to the late cell biologist George Palade, but I cannot confirm the provenance. In any case, it rings true; authors spend a lot of time teasing out subtle effects that—to be fair—are interesting but rarely have impact. In reading an article I always ask myself, “What will I do differently now?” and “What do I know now?”, in that order. An article that answers “nothing” to the first and “a little more” to the second has low priority for our journal, but such studies make up a large portion of the literature.

Fourth, ideas are cheap. I see a lot of manuscripts confusing an interesting hypothesis with an important advance. We frequently see manuscripts that provide a modicum of data for a new, sometimes radical hypothesis. These manuscripts often can be identified by the use of words such as novel or for the first time in the abstract and conclusions. This phenomenon was on full display with COVID-19–related research, where the identification of a few cases of an eye condition coinciding with COVID-19 infection or vaccination led to a manuscript. (When several billion vaccine doses have been administered in 2 years, one-in-a-million things happen thousands of times by chance.) These manuscripts seek to stake a claim for the authors to take credit for an idea. To quote Isaac Asimov, “Ideas are cheap. It’s only what you do with them that counts.” An article providing solid evidence for a new hypothesis has much more value than one that leverages a few observations into a more general hypothesis.

Fifth, mind one’s Ps. Much has been made in the past few years of “P hacking,” one sign of which is that the number of published studies reporting P < 0.05 is far more than 5 times the number reporting P < 0.01. This suggests that we overrepresent studies that just clear the arbitrary threshold for significance. Although most authors now state explicit P values in their work, which is an improvement, many fail to apply appropriate multiple-comparison corrections and thereby understate the likelihood that their results occurred by chance (a worked sketch of such a correction appears at the end of this editorial). A particular pet peeve is studies in which the P value is deemed more important than the magnitude of effect (where, for example, the abstract states that “medicine A lowered intraocular pressure more effectively than medicine B [P < 0.05]” without giving the magnitude). For instance, a 0.5-mmHg reduction in pressure, even if statistically significant, is unlikely to be clinically significant; hiding the magnitude of effect from the reader is poor form and relates to my next point.

Sixth, advertisements can masquerade as manuscripts. This is an insidious and challenging problem for any article referencing a commercial product. The individuals developing the product frequently are authors on these articles, and their potential conflict, although explicit, is not easily resolved. The same data can be presented in many ways, and spinning a dataset to highlight a 1-sentence advertising blurb has become a common sport.

That’s enough for now! On behalf of our family of journals, I thank you for your engagement (manifested by your making it to the end of this editorial), and wish you and yours all the best for the new year.
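As the worked sketch promised under the fifth grievance, here is a minimal example of a multiple-comparison correction; the P values are invented, and Bonferroni is used only because it is the simplest such correction.

    # Toy multiple-comparison correction (fifth grievance); P values are invented.
    p_values = [0.003, 0.012, 0.021, 0.034, 0.047, 0.18, 0.31, 0.44, 0.62, 0.85]
    alpha = 0.05
    m = len(p_values)  # number of comparisons made in the same study

    # Bonferroni: each comparison must clear alpha / m instead of alpha.
    nominal = [p for p in p_values if p < alpha]
    corrected = [p for p in p_values if p < alpha / m]

    print(f"Nominally significant at P < {alpha}: {len(nominal)} of {m}")
    print(f"Significant after Bonferroni (P < {alpha / m}): {len(corrected)} of {m}")
    # Five comparisons clear the naive threshold; only one survives the correction,
    # and even that one should be reported alongside its magnitude of effect.

Holm or false-discovery-rate procedures are gentler alternatives, but the principle is the same: the more comparisons a study makes, the less a lone P < 0.05 means.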