Letter | Open access | Peer reviewed

Does Everything Need to Be “Scientific”?

2016; Elsevier BV; Volume: 68; Issue: 6; Language: English

10.1016/j.annemergmed.2016.06.043

ISSN

1097-6760

Authors

David L. Schriger,

Topic(s)

Empathy and Medical Education

Abstract

SEE RELATED ARTICLE, P. 729.

"You see control can never be a means to any practical end.… Control can never be a means to anything but more control."
—William S. Burroughs, Naked Lunch

"The imperfect is our paradise."
—Wallace Stevens, "The Poems of Our Climate"

Peer review of the primary medical literature has been taking place for more than 200 years,1 but only in the last 30 years has its value been formally studied and its processes refined.2,3 Although it is an imperfect process, peer review helps readers by improving the completeness of publications, identifying articles with fatal flaws, and, perhaps, by parsing articles among journals of various reputations, rating them by importance and quality. This is all well and good, except for one minor problem: most clinicians are not reading the original research that constitutes the bulk of the primary literature. The sheer amount of it, more than 800,000 citations added to PubMed each year,4 and the increasing complexity of study design and analytic methods, which render many readers incapable of judging a study's value, have led most clinicians to rely on the secondary literature, which helps clinicians understand clinical research by synthesizing the primary literature in some way.

Secondary literature is not new; textbooks of medicine existed long before there was a primary literature to synthesize, but in recent years it has been augmented by a variety of online resources, ranging from proprietary texts such as UpToDate to blogs such as EMCrit, Life in the Fast Lane, and Academic Life in Emergency Medicine (ALiEM). So if we have peer review for the primary literature that most clinicians don't read, shouldn't we have peer review for the secondary sources they do use? Physicians and trainees would certainly welcome informal guidance about which sites are worth their time. In fact, this is already done by the Emergency Medicine Residents' Association,5 which bases its recommendations on an article in this journal by Thoma et al.6 It has also been done in a more formal way by ALiEM through their Approved Instructional Resources (AIR)7 and AIR Pro scoring systems.8 So the question is not whether secondary sources in emergency medicine should be rated—everything is rated in the current era—but by what means they will be rated.

Options for a rating system for emergency medicine educational Web sites range from the subjectivity of a single critic's opinion to the democracy of "likes" on Facebook or "hits" on Google Analytics to psychometrically validated "scientific" measures such as the ALiEM scores. So which is better: relying on the anecdotal recommendations of colleagues or respected leaders, or relying on formal scoring systems such as ALiEM-AIR? In this issue, Chan et al9 report their evaluation of the psychometric properties of ALiEM-AIR. I will not critique their article, because its methods and findings are well described and it is generally quite good. Instead, I want to contemplate why our society feels the need to make rating processes more "scientific," especially in domains that are largely subjective, and to consider the potential harm of these efforts. I propose that the most desirable method of rating a domain will vary with what is being rated. Imagine an axis that on one extreme has highly technical issues, such as automobile energy efficiency or the amount of torque one can put on a 6-0 suture needle before it bends, and on the other highly aesthetic ones, such as which car is most stylish or who is the best physician on an emergency department's staff. For technical topics, the best rating systems will be highly objective; although such systems may not agree completely, differences should be small and explained by differences in measurement technique. The secondary literature, however, sits at the other end of the spectrum. Sites vary widely in their purpose, educational style, and target audience.

The appeal of such sites is highly dependent on aesthetics and taste, qualities that are not homogeneous across users. Is there really a need for a scientifically valid rating system for something as subjective as the secondary literature? One might counter that this is a trivial point: who cares whether one develops a democratic, expert-critic-based, or scientific method for rating the secondary emergency medicine literature? But I worry that the desire to create a "scientific" rating system is emblematic of a wish to eliminate uncertainty and subjectivity from medicine. One modern illusion is that medicine can be wholly digital, that, if we have enough data, we can care for patients algorithmically, consistently delivering the best care. The idea that, with the right rating system, we can ensure that physicians spend their learning time optimally, looking only at the sites that offer the best information, is a similar fantasy. These are wishful but fundamentally misguided ideas. Intuition and serendipity will always play a role in both patient care and medical education. A large literature demonstrates that master clinicians do not use algorithmic thinking when making diagnoses; another demonstrates that students learn in many different ways. As Wallace Stevens writes, the imperfect is, indeed, our paradise. Why, then, do we need to formally rank, sort, or codify educational Web sites when these materials are readily available and those using them should have the intelligence to choose among them? Although the desire to rate scientifically may be well intended, I fear the unintended consequences. Making this activity "scientific" is yet another example of how the desire for the appearance of objectivity and standardization is eroding professionalism in medicine. Physicians are now constantly dissuaded from making the subjective judgments that are the hallmark of professional activity.

Managers tell physicians to automatically follow the sepsis bundle, the stroke protocol, or 50 other routines if patients meet certain criteria. Although this may help some patients, the danger is that a culture of external control creates a disengaged physician who is no longer acting as a professional. The seemingly benign act of formally rating the secondary medical literature may have similar unintended consequences if it is heard as "turn off your brain," "don't use your own judgment," or "be a good sheep and follow the herd." From a training perspective, will the existence of a scientifically validated rating scale dissuade learners from developing their ability to be discerning, independent consumers? Casual ratings do not stop physicians from browsing, but formal ratings by a respected group may have greater influence. Is it possible that we cannot trust physicians to decide for themselves which blog to read but can trust them to decide whether a patient needs an emergency pericardiocentesis, a computed tomography angiogram, or thrombolytic therapy? If it was once reasonable for physicians to browse the physical bookstore independently, why do we feel the need to provide formal guidance in the virtual one? Chan et al9 have shown that it takes 9 physicians to reliably judge a secondary medical literature site. I remain hopeful that it really takes only one.

Reference(s)

1. Fyfe A. Peer review: not as old as you might think. 2015. Available at: https://www.timeshighereducation.com/features/peer-review-not-old-you-might-think. Accessed June 1, 2016.
2. Lock S. A Difficult Balance: Editorial Peer Review in Medicine. London, England: Nuffield Provincial Hospitals Trust; 1985.
3. Bailar JC, Patterson K. Journal peer review—the need for a research agenda. N Engl J Med. 1985;312:654-657.
4. National Library of Medicine. MEDLINE citation counts by year of publication (as of mid-November 2015). 2016. Available at: https://www.nlm.nih.gov/bsd/medline_cit_counts_yr_pub.html. Accessed June 1, 2016.
5. Emergency Medicine Residents' Association (EMRA). Recommended blogs and podcasts. Available at: https://www.emra.org/resources/recommended-blogs-and-podcasts/. Accessed June 1, 2016.
6. Thoma B, Joshi N, Trueger NS, et al. Five strategies to effectively use online resources in emergency medicine. Ann Emerg Med. 2014;64:392-395.
7. Academic Life in Emergency Medicine (ALiEM). AIR Series grading tool—board approved. Available at: https://docs.google.com/spreadsheets/d/1Ou7YAjopZy2ncRV-oUYr3A08pwfR0MQrzEWIRxH5P34/edit#gid=0. Accessed June 1, 2016.
8. Academic Life in Emergency Medicine (ALiEM). AIR Pro grading tool. Available at: https://docs.google.com/spreadsheets/d/1Z1rjt8pyEZYKt2ZXcW70XohmmzUUGsgFjL2RctOyO-E/edit#gid=0. Accessed June 1, 2016.
9. Chan TM, Grock A, Paddock M, et al. Examining reliability and validity of an online score (ALiEM AIR) for rating free open access medical education resources. Ann Emerg Med. 2016;68:729-735.