Article; Open Access; Peer Reviewed

Evidence-based medicine

2013; Lippincott Williams & Wilkins; Volume: 75; Issue: 6; Language: English

DOI

10.1097/ta.0b013e3182932bac

ISSN

2163-0763

Authors

Mark T. Metzdorff

Topic(s)

Clinical Reasoning and Diagnostic Skills

Abstract

Figure: Mark T. Metzdorff, MD, Western Trauma Association president.

We hear a lot these days about the concept of "evidence-based medicine." Six years ago, one of our most illustrious and intelligent presidents also made it the subject of a presidential address.1 I will cover a little of the same ground, but bear with me and we will sail into some different seas, as I will address the use of non–evidence-based medicine in some aspects of modern medical care, rather than the exciting possibilities that Fred Moore described. I think we all have an idea of what evidence-based medicine means to us personally, but in fact, there is a definition that is accepted by some major organizations devoted to the study and promotion of the concept, and there is a large body of work by these organizations and others around the topic. "Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients."2 This relatively recent definition implies that the concept is modern, but look at the key word current, and one can see that the concept can be said to be timeless, for what is current changes as our knowledge base changes. To me, one of the interesting things about this concept is how physicians have practiced evidence-based medicine through the years.

To illustrate this, I would like to take you back to the time of the Napoleonic Wars, at the turn of the 19th century, when the English Royal Navy battled for control of the seas. Some of you may be familiar with a series of novels by a wonderful author of historical fiction named Patrick O'Brian. Between 1970 and 1999, O'Brian produced the 20-book series that aficionados call "The Aubrey-Maturin Series"3 (Fig. 2). Those not familiar with the books may have experienced a small taste of O'Brian's world through the movie adaptation "Master and Commander," released in 2003. The movie was an amalgam of several of the books, with smaller and larger pieces taken from them. The 20 books are chronological and span about 13 years of history, in which the characters experience global events as they sail around the world in the course of their duties. There are dozens of memorable characters and plot lines, which ebb and flow throughout the books, and there is incredible attention to detail in the historical settings, the natural landscapes and seascapes, and the depictions of life at sea and on land in the time of Lord Nelson and Napoleon, and in the aftermath of the American Revolution. I commend these books to anyone who loves a good story, for they are as entertaining as they are informative. Patrick O'Brian created a masterwork on a level with the best historical fiction ever written and has been rightly celebrated for it.

Figure 2: The Aubrey-Maturin Series, by Patrick O'Brian.

Most people who have seen the movie assume that Captain Jack Aubrey of the Royal Navy, the character played by Russell Crowe, is the main protagonist. However, in the literary series, there are really two coequal protagonists. The second, and to my mind much more appealing and interesting, character is Jack Aubrey's dearest friend, Dr. Stephen Maturin. Of course, he is a surgeon!
In fact, he is much, much more than a mere surgeon: he is a physician in the 18th-century sense who treats all ailments, a naturalist who is a member of the Royal Society and regularly presents his work at Society meetings, a polyglot who speaks six languages fluently and is conversant in four others, a passably good cellist, a superb swordsman, a statesman, a gentleman, and a spy for the British Admiralty, and he is an absolutely inept sailor. He mangles the nautical names of the ship's components and cannot pass between the dock and the ship without falling in the drink. Thus, he is constantly looked after in this regard by his shipmates, who rightfully value him as a particularly renowned ship's surgeon and a man who might someday save their lives. Stephen Maturin is the illegitimate son of an Irish father and a Catalan mother and is described as short, swarthy, and unkempt, belying his supreme intelligence and quick wit. He is an unparalleled strategist in all of his dealings and, unlike most surgeons, is rarely wrong while never in doubt. He took his medical training in Dublin and Paris and was said to have "dissected with Dupuytren." It is universally agreed among fans of the books that in the movie, Maturin is relatively overlooked and badly cast. Paul Bettany, while a fine actor, bears no physical resemblance to the Stephen Maturin who is so well known and beloved by fans of the series.

What was Stephen Maturin's medical world of 1790? What passed for "evidence-based medicine" on board a ship of the Royal Navy? A ship's surgeon fulfilled many roles in such a vessel, and Captain Aubrey, while not as brilliant as his surgeon, demonstrated his understanding of the importance of the naval surgeon to the optimal function of a fighting ship by selecting Dr. Maturin as his man. Of course, a competent surgeon kept as many men as possible healthy enough to work the ship and fight effectively, but just as important, morale was much improved if the men knew they would be well cared for in case of sickness or injury. So the naval surgeon's duties encompassed both general health and trauma treatment. In the category of general health maintenance, Maturin was concerned with the prevention of scurvy, the treatment of venereal diseases and yellow fever, the quarantine of those with communicable diseases, and many other less surgical matters. It is beyond the scope of this address to give an exhaustive treatise on the medicine of the day, but to illustrate aspects of the sea surgeon's practice in keeping with my theme of evidence-based medicine, I would like to use two issues that a naval surgeon of the 18th century would have dealt with, considered in the context of the evidence of the day: extremity trauma and the prevention and treatment of scurvy.

As an example of what constituted evidence-based surgery to the 18th-century naval surgeon, consider significant trauma to the extremity. With the exception of uncomplicated fracture, such an injury usually meant amputation as the most effective treatment to avoid the dreaded complication of gangrene, which in turn inevitably led to death. The basis for this was centuries of observation, beginning with Hippocrates in the 5th century BC, that these wounds progressed to life-threatening infection that could be avoided or treated by amputation, although observers did not truly understand the pathophysiology.
By the end of the 18th century, the great French military surgical pioneer Baron Dominique Jean Larrey, nicknamed "Napoleon's Surgeon," was said to have brought amputation to "the peak of advancement and perfection."4 This was the period of the Napoleonic Wars, which are the background for the Aubrey-Maturin books. Between these two physicians, 23 centuries apart, came incremental improvements in surgical technique and wound care, including the use of the tourniquet and of vascular ligatures. However, Larrey, who made innumerable advances in battlefield surgical care, took the care of the injured extremity a step further by strongly advocating "primary amputation" in selected cases rather than waiting the more customary 3 weeks for suppuration to occur before amputation.5 In Napoleon's 1812 march to Moscow, Larrey participated in a battle with 13,000 French casualties in a 15-hour period. He was said to have personally performed 200 operations, mostly amputations, in the 24-hour aftermath—a bad night of trauma call. His vast experience led him to promote the practice of early amputation when indicated, and he advocated saving the knee joint if it was not involved in the injury. Although on the opposite side of the conflict, he was supported in the idea of early amputation by the British surgeon George James Guthrie, "Wellington's Combat Surgeon," whose work would possibly have been known to surgeons of the Royal Navy.6 This evidence-based practice is dramatically illustrated in the movie when Stephen amputates the wounded arm of the 12-year-old midshipman Lord Blakeney the evening after the "young gentleman" is wounded in battle. This particular vignette, however, is found nowhere in the books, although Stephen is awash in blood after every major engagement, amputating limbs and spreading sand on the deck to keep his footing.

On the medical side, the history of the conquest of scurvy is both fascinating and illustrative. Much of what follows about the history of scurvy comes from the excellent and interesting book Scurvy: How a Surgeon, a Mariner and a Gentleman Solved the Greatest Medical Mystery of the Age of Sail, by Stephen R. Bown7 (Fig. 3). Scurvy has been described as a disease of civilization, since it was only after men became capable of long voyages at sea that the condition began to be recognized. As you recall from your first year of medical school, scurvy results from a lack of dietary ascorbic acid (vitamin C), which is necessary for the function of the enzyme prolyl hydroxylase. This enzyme hydroxylates the amino acid proline in the three alpha-chain collagen precursors so that they can bind together to form the larger protein, collagen. All the observed ill effects of scurvy stem from defective collagen metabolism.

Figure 3: Scurvy: How a Surgeon, a Mariner and a Gentleman Solved the Greatest Medical Mystery of the Age of Sail, by Stephen R. Bown.

The first written description of the effects of scurvy is said to be in the journal of the explorer Vasco da Gama from his 1497 voyage around the Cape of Good Hope, but a later description by the Royal Navy Commodore George Anson is vivid: "The common appearances are large discoloured spots, swelled legs, putrid gums and above all an extraordinary lassitude of the body, especially after any exercise whatsoever; this lassitude at last degenerates into a proneness to swoon and even die on the least exertion of strength.
This disease is likewise attended with a strange degeneration of spirits, with shivering, trembling and a disposition to be seized with the most dreadful terrors on the slightest accident." This was not a condition conducive to sailing a square-rigged ship around the world! During Anson's 1740 to 1744 circumnavigation, 1,500 of the original 2,000 sailors perished, all but a handful to scurvy and starvation. This, amazingly, was typical of the day. Sea captains counted on lethal attrition of at least half their ship's company, mostly caused by scurvy. Stephen Maturin and Captain Aubrey encountered scurvy several times in the course of the 20 books, most notably in the novel HMS Surprise, when, becalmed in the doldrums of the southern Atlantic, they ran low on fresh provisions.8 After finally escaping the doldrums, Stephen prevailed on Captain Aubrey to interrupt the pursuit of their foe and touch on the Brazilian coast for fresh fruits and vegetables, by showing him the physical effects of scurvy on the crew: swollen gums, old wounds reopening, old fractures coming apart.

Scurvy was a dreaded and constant companion on sea voyages for more than 400 years. During that time, many theories about its origins, treatment, and prevention were promulgated. Most were useless, based not on evidence as we know it but rather on simple observations or on elaborately constructed systems that had been formulated to try to explain the processes that physicians, physiologists, and anatomists thought they observed at work in the human body. The theories about humors, fluids, circulation, and obstruction dating to the Greeks were still in use in the 18th century, and so by various authorities scurvy was said to be caused by "bad quality of air," the lack of "the honest company of one's lawful wife," "an infection of the blood and liver," "putrefaction" of digested food, and "lazyness and sloth," the latter mistaking the symptoms of the disease for its cause. Obviously, attempts to prevent or treat scurvy based on these principles were ineffective. Meanwhile, the terrible toll of sickness, death, and loss of expensive ships and cargo continued.

However, as we have seen over the centuries, great medical breakthroughs have often come about as a result of war or other threats to the treasury of empires. One such breakthrough, the effective prevention and treatment of scurvy, came about as a result of the Royal Navy's response to the toll the disease took on its sailors and ships. Dr. James Lind was a physician and surgeon in the Royal Navy in the mid-18th century. In the course of his duties, he naturally developed an interest in the prevention and treatment of scurvy, and in 1747, at age 31 years, he conducted one of the first controlled trials in medical history. Dr. Lind, surgeon on HMS Salisbury, took 12 scorbutic sailors with advanced disease and divided them into six pairs, each pair isolated in a separate compartment of the ship. All were fed the same controlled diet, but each of the six pairs was given one of six conventional or proposed antiscorbutic regimens (a schematic of the allocation appears below). These were as follows: (1) "cyder," a slightly alcoholic fruit derivative, one quart daily; (2) elixir of vitriol, a blend of sulfuric acid, alcohol, and aromatic spices, 25 drops thrice a day; (3) two spoonfuls of vinegar thrice a day; (4) sea water, half a pint daily; (5) two oranges and a lemon daily; and (6) a nutmeg-sized dose of a paste of garlic, mustard seed, dried radish root, balsam of Peru, and gum myrrh, thrice a day.
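To make the design concrete, here is a minimal schematic in Python. This is my own illustration, not part of the address: the sailor names are hypothetical, and Lind of course allocated the pairs by judgment rather than by any formal scheme.

```python
# Schematic of Lind's 1747 trial aboard HMS Salisbury, as described above:
# 12 scorbutic sailors, six pairs, one regimen per pair, all on an
# otherwise identical diet. Sailor names are hypothetical placeholders.
regimens = [
    "cyder, one quart daily",
    "elixir of vitriol, 25 drops thrice a day",
    "vinegar, two spoonfuls thrice a day",
    "sea water, half a pint daily",
    "two oranges and one lemon daily",
    "garlic/mustard/radish/balsam/myrrh paste, nutmeg-sized dose thrice a day",
]
sailors = [f"sailor_{i + 1}" for i in range(12)]

# Pair off the sailors and assign one regimen per pair.
allocation = {reg: (sailors[2 * i], sailors[2 * i + 1])
              for i, reg in enumerate(regimens)}
for regimen, pair in allocation.items():
    print(f"{pair[0]} and {pair[1]} -> {regimen}")
```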
One can surmise that in most of his experimental groups, he was testing the popular hypothesis, held by the great physician Boerhaave among others, that scurvy was the result of "putrefaction" in the body of digested food and that this could be countered by acidic remedies. Indeed, elixir of vitriol was the conventional treatment for scurvy in the Royal Navy at that time. Although Lind's use of sea water sounds to modern ears like a placebo control, in fact, he later wrote that he had heard of many instances where salt water was given with great benefit, and with an unlimited supply in the ocean, he was likely hopeful that it would prove to be so. As one would expect, the lucky pair who feasted on citrus fruit showed dramatic improvement, although Lind ran out of citrus fruit halfway through the 2-week trial. At the end of 2 weeks, one of the two men was certified fit for duty, and the other was nearly recovered. Of the other five groups, only the cider group showed any evidence of benefit, and that was merely a slowing of deterioration compared with the others.

Lind apparently was unable or unwilling at first to promote his findings but waited until he had become a successful private practitioner on land to publish, in 1753, his book A Treatise of the Scurvy. It is likely that Lind's clinical trial and subsequent book were stimulated in large part by knowledge of the disastrous toll scurvy took on Anson's global voyage of 1740 to 1744. Lind's book was largely ignored. In a pattern that has remained prevalent throughout history even to today, the effective preventive treatment he discovered did not take hold for years; among the reasons suggested is that other prominent and respected physicians of his day disparaged his findings and theories in favor of their own. This criticism was in part justified, as Lind's own attempts to explain his results were understandably incoherent. Another major reason citrus juice failed to take hold as a remedy was that, in an effort to find a source that could be preserved on long voyages, the juice was concentrated by boiling it into a fluid called "rob." This fluid was ineffective, as the vitamin C had been inactivated by heating. Trials of "rob" were dismal failures and further served to confuse the investigators, ignorant of the true mechanism of benefit of fresh juice. Lind published two subsequent editions of his treatise but died not really understanding what he had learned from his trial.

The final chapter in the conquest of scurvy encompassed the subsequent global voyages, from 1768 to 1780, of the great explorer James Cook, during which, by careful attention to diet, not one sailor died of scurvy; the story was completed by another great naval surgeon, Sir Gilbert Blane, who picked up the threads of Lind's work and ultimately used his influence to convince the leaders of the British Admiralty that unprocessed lemon juice was both a preventative and a cure for scurvy. Beginning in March 1795, 1 year after Lind's death and 48 years after his controlled trial, sailors in the Royal Navy were given a daily dose of citrus fruit or juice, often in their rum ration, and scurvy was a thing of the past for those fortunate sailors. As a result, the Royal Navy remained the preeminent sea power and defeated Napoleon, and history was changed. Fortunately for us, this occurred after the American Revolution. So there you have two examples of what we might call evidence-based medicine for the 18th-century naval surgeon.
In that era, much of what Dr. Maturin and his colleagues did was based on ancient, unproven theories and on centuries of empirical observation, and it was limited by a lack of effective drugs and techniques. Communication of medical knowledge was also severely limited by the technology of the time, with only the poorly distributed printed word, difficult travel, political conflicts both within and between nations, and few practitioners of the healing arts.

How has the concept of evidence-based medicine evolved over the ensuing three centuries? Again, a comprehensive review is beyond the scope of this presentation, but I can touch on some highlights before concluding with some personal observations on the state of evidence-based medicine in the current era. The 19th century saw many advances in medical care: vaccination, anesthesia and the concomitant advances in surgery, germ theory, and antisepsis, to name but a few. For the most part, these advances occurred in the historic paradigm of observation and empiric testing with trial-and-error methodology.

A necessary component in the transition to what we call evidence-based medicine today was the development of statistical methods of analysis, which had roots in the 17th century. It was the 1601 edict by King James I of England mandating parish registers—detailed records of baptisms, marriages, and deaths—that provided perhaps the first "database" of the epidemiologic activity of humans.9 An excellent brief essay on the history of statistics in medicine is included in the text Essential Evidence-Based Medicine by Mayer.10 Early works on probability by the mathematicians Huygens and Pascal around 1660 set the stage for comparative analysis. With his 1662 publication, the British merchant John Graunt pioneered statistical sampling of the London population to determine death rates and estimate the risk of dying of various maladies, again drawing on detailed parish records, and in 1665 he applied these methods to an analysis of the spread of the plague in London. In 1836, Pierre Louis published his work on the effect of bloodletting in inflammatory diseases, demonstrating "narrow limits to the utility" of therapeutic bleeding in the treatment of pneumonia by comparing populations of patients bled at different times in the course of the illness.11 In 1854, John Snow, using epidemiologic techniques, localized a cholera outbreak to a single contaminated water pump in London.12

However, despite these and other early examples of the application of statistics and epidemiology to medical practice, 19th-century medicine remained for the most part mired in empiricism and experience. The "art of medicine" lay in the ability of the clinician to use his personal knowledge or preference to recommend a treatment or to seek a similarly empiric opinion from a learned consultant. In the 20th century, the elements of statistical analysis were advanced and applied to medicine, finally allowing the development of evidence-based medicine as we know it. The familiar Student's t test was introduced in 1908, at first as an adjunct to quality control at the Guinness brewery.
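As an aside of my own, not part of the original address: the test Gosset introduced is trivial to run today. The sketch below compares a hypothetical measured outcome in two small treatment groups using SciPy's pooled-variance implementation; all values are invented for illustration.

```python
# A minimal sketch of the two-sample Student's t test (pooled variance),
# using hypothetical measurements in two small treatment groups.
from scipy import stats

group_a = [14.1, 13.8, 15.2, 14.7, 13.9]  # hypothetical outcome, treatment A
group_b = [12.9, 13.1, 12.4, 13.6, 12.8]  # hypothetical outcome, treatment B

# ttest_ind defaults to the classic pooled-variance (equal-variance) test.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```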
Other statistical methods also came to be applied to medical and epidemiologic research, and the London School of Hygiene and Tropical Medicine was a particularly fruitful place.13 In 1931, Woods and Russell of that institution published An Introduction to Medical Statistics, and in 1937, their colleague Austin Bradford Hill authored a series of articles in The Lancet on the use of statistical analysis in medicine, later compiled into the book Principles of Medical Statistics. Hill has rightfully been called the greatest medical statistician of the 20th century for his contributions to the conduct of medical research. By 1947, he was calling for the inclusion of statistics in the medical curriculum. He went on to introduce the first randomized controlled clinical trials (RCTs), one of which demonstrated the superiority of streptomycin over standard clinical therapy for tuberculosis, more than 200 years after Lind's scurvy experiment.14 More recently, Sir David Cox, noted for his proportional hazards model, and others have carried medical statistics into the 21st century.

The term evidence-based medicine was first used in the early 1990s. A nice summary of the modern history of evidence-based medicine is contained in a 2003 article by Cohen et al.,15 which is actually a critique of the concept, published around 10 years after it took hold. The roots of contemporary evidence-based medicine perhaps began with the 1972 publication of the book Effectiveness and Efficiency by Archibald Cochrane, in which he claimed that many of the tests, therapies, procedures, and other medical interventions in use had no good evidence to support their use or effectiveness and might in fact be more harmful than useful.16 At that time, RCTs were still a relative rarity, but Cochrane promoted their use as the best means of assessing the validity of a medical intervention. By 1985, a group of epidemiologists from McMaster University had responded to Cochrane's challenge by developing new methods of analysis and publishing the textbook Clinical Epidemiology, which discussed the application of epidemiologic evidence in the guidance of clinical medicine.17

By 1992, an article in JAMA by "The Evidence-Based Medicine Working Group" heralded "a new paradigm for medical practice."18 This group of mostly McMaster clinicians, with others from the United States and Canada, characterized the "former paradigm"—(1) unsystematic observation; (2) study and understanding of basic mechanisms of disease and pathophysiologic principles; (3) thorough traditional medical training and common sense; and (4) content expertise and clinical experience—as a traditional emphasis on authority and the opinions of experts. The new paradigm was described as (1) caution in the interpretation of information derived from clinical experience; (2) knowledge of mechanisms of disease and pathophysiology as necessary but insufficient to guide therapeutic or diagnostic efforts; and (3) understanding of the rules and levels of evidence as necessary to correctly interpret the literature, which is central to making informed decisions about interventions. Proponents of this new paradigm were fervent in their zeal to explore and apply it, aided and abetted by the burgeoning new resource known as the World Wide Web. There seemed to be no limit to what might be accomplished: improved patient care, huge cost savings, reduced liability, and a revamping of medical education, among many other possibilities.
However, there were contrary voices, and by 1996, one of the original Working Group, D. L. Sackett, by then at Oxford, was defending the concept in a British Medical Journal article, "Evidence based medicine: what it is and what it isn't."2 He pointed out that in the short interval, the concept had exploded, with workshops devoted to it, curricula revamped to incorporate the teaching of evidence-based medicine, journals dedicated to the topic, and increased attention in the lay media. In addition, one of the biggest developments was the founding in 1993 of the Cochrane Collaboration, an international network of clinicians and colleagues formed to "prepare, maintain, and disseminate up-to-date reviews of randomized controlled trials of health care." The group was named in honor of Archie Cochrane, who first gave widespread attention to the value of the highest level of medical research evidence. The Cochrane Collaboration and its main product, the Cochrane Database of Systematic Reviews, have become a standard resource for those looking for guidance in medical therapy based on the best available research evidence.19 There are more than 5,000 reviews on topics across the spectrum of medical interventions. What is striking to me, however, as one peruses the reviews, is that in so many of the abstracts (which are available free online), the conclusion is that there is no good evidence to support or refute the hypothesis being reviewed. Our Dr. Moore explored the shortcomings of the Cochrane Reviews in greater detail.1 At least by making us more aware of the shortcomings of the medical evidence, the Cochrane Reviews provide ample fodder for the budding researcher. Of course, not all interventions can practically be evaluated by an RCT. Does that mean we cannot make an informed judgment about a therapy? Of course not, but the level of evidence will be lower.

Other databases have been significant tools of evidence-based medical research. Trauma surgeons are compulsive collectors of clinical information, and no trauma center is without its detailed database. In my specialty, the Society of Thoracic Surgeons (STS) database of cardiac surgery was the first and remains arguably the most celebrated of the national clinical databases.20 It currently encompasses more than 4.5 million patients and has given rise to more than 100 valuable studies, although most are by default retrospective, even though, as in trauma, the data are collected prospectively.

All is not well, however, in the land of Dr. Oz. (For those of you who do not follow Oprah Winfrey, Dr. Mehmet Oz is a respected and telegenic cardiothoracic surgeon from Columbia University who has quite successfully transitioned into the mass media as a health expert.) What follows is my opinion alone, but it is one that I believe is shared by many, even some leaders in our specialty. Despite the heritage of data-driven advances in care in thoracic surgery, I believe there has been a growing problem in my specialty, and for that matter in other surgical specialties, of departure from the principles of evidence-based medicine. I will speak from my own experience and from knowledge unique to my specialty, and I will briefly discuss two prominent examples: the evolution of beating heart surgery and robotic cardiac surgery.
As you know, the evolution and subsequent explosion of cardiac surgery occurred as a result of two developments: the invention and perfection of the heart-lung machine ("the pump") in the 1950s and early 1960s and the development and perfection of techniques for heart valve replacement, and especially coronary artery bypass surgery, through the 1970s. There are a number of past presidents of this organization who have benefited from these advances. A nagging problem with these techniques was the occurrence after surgery, in a small percentage of patients, of subtle or less subtle neurocognitive deficits, including confusion, memory loss, and thought disorder. Fortunately, in the majority of patients, these were self-limited and resolved in a matter of weeks, but some patients had persistent problems. It was the theory of some surgeons that the use of the pump, which is in many ways a very unphysiologic form of circulation of the blood, was the culprit in creating these neurologic deficits and perhaps other deleterious effects.

In the mid-1980s, surgeons from South America reported early experiences with coronary artery bypass grafting (CABG) using techniques for stabilizing the beating heart, allowing grafts to be performed without the need for the pump to support the circulation.21,22 In 1989, a surgeon from San Diego named Steve Gundry and his colleagues adopted these techniques for about a year. In a 1997 long-term follow-up report of this nonrandomized series of 219 patients, about half of whom underwent conventional surgery and half off-pump surgery, he found no difference in outcomes between the two groups and concluded that beating heart surgery was not an improvement; in fact, the beating heart group received one fewer graft per patient and required subsequent revascularization procedures more often, by a factor of 2:1.23 Other surgeons, however, did not accept Dr. Gundry's conclusions and began developing better retractors, heart stabilizers, and other tools to facilitate beating heart surgery.

Beginning in the mid-1990s, there ensued a two-decade-long effort to try to prove that beating heart surgery was superior to the standard CABG operation. This race to transform the conduct of CABG was aided and abetted by a medical-industrial complex that was happy to partner with entrepreneurial surgeons to develop expensive disposable tools. These efforts would have been fine if the development and testing of these techniques had been performed in an evidence-based way, with appropriate multicenter randomized controlled trials performed early on, but they were not. Surgeons enamored of the new technology published their personal experiences, many after going through "the learning curve" (which is another way of saying patient jeopardy or even harm), and compared outcomes with historical controls, with contemporary outcomes of different surgeons, and with other lower levels of evidence. When appropriate, careful, large-scale, multi

Reference(s)