Article · Open access · Peer reviewed

The Epistemology of Clinical Reasoning

Academic Medicine, 2000; Lippincott Williams & Wilkins; Volume: 75; Issue: Supplement; Language: English

DOI

10.1097/00001888-200010001-00041

ISSN

1938-808X

Authors

Geoffrey R. Norman

Topic(s)

Clinical Reasoning and Diagnostic Skills

Abstract

Physicians' clinical reasoning has been an active area of research for about 30 years. The goal of the inquiry has been to reveal the processes whereby doctors arrive at diagnoses and management plans (although, as Elstein correctly points out in his discussion of this paper,1 the focus has been more on the former than on the latter) so that we could use this information to devise specific instructional strategies or support systems to make the acquisition and application of these skills more efficient and effective. Initially, these "clinical reasoning skills" were conceived of as general and content-independent, observable in all clinicians working through any problem. That is, they were thought of as a general mental faculty, presumably rooted in the architecture of the mind, that would be brought to bear on solving clinical problems. However, the research findings did not support this viewpoint. Elstein and Shulman2 showed that whatever clinical reasoning was, it was definitely not skill-like: generalization from one problem to another was consistently poor, a finding that ultimately sounded the death knell for evaluation methods such as patient management problems. The past 30 years have seen an accumulation of evidence, in medicine and many other disciplines,3 about the nature of the process, and have shown the importance and centrality of knowledge. The central issue of this revised research program is achieving an understanding of how knowledge is initially learned, how it is organized in memory, and how it is accessed later to solve problems.

A second research program in medical decision making also emerged from research of the early 1970s. As Elstein discusses in the companion paper, this program "views diagnosis making as opinion revision with imperfect information."1 From the decision-analytic perspective, the best decisions arise from the application of a statistical decision rule to data; any other method is suboptimal. Thus, the research agenda is directed at identifying areas, such as medicine, where humans function in a suboptimal way, and at attempting to understand the strategies, the heuristics and biases, they apply to arrive at these suboptimal decisions. Elstein states that "it seems to me that decision theory is at least as promising as the study of categorization processes." He may well be correct.

But the two schools highlight a fundamental epistemologic dilemma that the remainder of this paper addresses: Will we understand more about the nature of clinical diagnosis by focusing on the diagnostician and striving to understand the mental processes underlying diagnosis, or by focusing on the clinical environment and attempting to understand the statistical associations among features and diseases? To what extent is the world of clinical reasoning "out there," comprehensible through the relations between symptoms and diseases, and to what extent is it "inside," understandable only by examining mental processes in detail?

Further dilemmas face us as we examine the research in clinical reasoning. "Organization of knowledge" is viewed as a critical determinant of expertise in medicine. But it is not really clear what is meant by organization of knowledge.
Is knowledge organized hierarchically, with general concepts at the top, more specific scripts in the middle, and specific instances at the bottom?4 Is it organized in networks with nodes and connections,5 as a symptom-by-disease matrix,6 as propositions with causal links,7 as collections of semantic axes,8 or as individual examples with no overarching concepts, as some of my earlier research claimed?9 A perusal of these various studies leaves the reader with only one overall impression: the human mind is incredibly flexible and can organize and reorganize information at will, seemingly effortlessly, to give the researcher exactly what he or she wants to hear. It is no coincidence that propositional networks are disturbingly idiosyncratic and not apparently reproducible.5 My view is that all of these concept architectures are produced on the fly at retrieval, in order to satisfy the expectations of the researcher, and that none can claim special status as the way knowledge is organized. Do you want the clinician to tell you the probability that myocardial infarction (MI) will present with referred pain to the back? Can do. The nature of the neural pathways linking the heart and the upper arm? Sure. The hair color of the last patient they saw with an MI? Red.

Given this incredible diversity of knowledge, from specific to general, it seems likely that any attempt to uncover a representation of knowledge consistent with a particular perspective, using fairly directive probes, will be successful; however, the ultimate form of this knowledge (if that is even an issue worth addressing) will remain elusive. Still, if the clinician's mind is really that malleable, this poses a serious challenge to the research tradition. Are there really any more "basic" or "primitive" forms of knowledge? How can we understand the nature of clinical reasoning if it is this flexible?

These were the questions that presented themselves as I reviewed the studies of clinical reasoning. As I thought about these issues, I began to explore other perspectives on the nature of knowledge and knowing from philosophy, psychology, and neuroscience, and started to identify common threads that, I think, can shed some light on these questions. As I did so, I found myself moving back and forth among three kinds of knowing, more or less from specific to general: How does the clinician come to know about diseases? How might diseases be represented in his or her mind? How do we as researchers come to understand domains of science, whether these are the diseases of clinical research or the workings of the clinician's mind? What do we mean by knowing? What do we mean when we say we understand something? In the remainder of this article I roam freely among these levels, since many of the writings I uncovered inform all levels.

But I must begin with a disclaimer. My journeys in this field are those of an amateur, and are recent. I have been heavily influenced in my interpretations by two books. The first is Lessons from an Optical Illusion, by Hundert,10 who took the brave step of trying to find links among philosophy, psychology, and neuroscience. His goal was to place ethics in the context of these disciplines; mine is to turn these general truths to an understanding of clinical reasoning. A second major influence on my thinking is What is this Thing Called Science? by Chalmers,11 a wonderful and readable review of classical philosophy and philosophy of science. I highly recommend both.
The starting point of my discourse is a critical examination of the concept of disease. My intention is to use the exploration of disease as a case study of how we come to know about things.

What is a Disease?

Through advances in biology, physiology, and molecular biology, we have come to a deep understanding of the mechanisms of many diseases. It seems almost nonsensical to now turn the clock back and ask what a disease is. But this small departure may serve us in good stead in understanding better what a concept is and how people identify concepts. Let's take two examples.

Is syphilis a disease? Absolutely. It fits the medical model to perfection. A bacterium invades the host, stimulating a diversity of processes that are ultimately manifested in clinical signs. Osler said, "understand syphilis and you understand all of medicine." But there is a small historical glitch. Syphilis has been with mankind for millennia, and the signs and symptoms were well established long before the bacterium was isolated.

Is heart disease a disease? Yes. Put a label such as anterior myocardial infarction on it, and it looks even more like a disease. But likely we are all harboring the precursors of ischemic disease as cholesterol plaques slowly accrue in our arteries. So, in a manner of speaking, the prevalence of heart disease approaches 100%. Can we then still speak of it as a disease? And by the way, although there are many risk factors for heart disease, there is no clear cause.

The same is true for cancer. We can easily identify cancerous cells on pathology slides, and we can correlate the clinical course with the accumulation of malignant lesions, but we all have microscopic tumors in our thyroids, and a third of men who die of unrelated causes are found to have prostate cancer.

All of these things seem disease-like because we can "explain" them at some lower level—plaques, bacteria, malignant cells. But there are many other diseases listed in textbooks that have no clear causes, no microscopic correlates, no known mechanisms. And it is well to bear in mind that although anthropologists and historians have identified evidence of (for example) tuberculosis dating back several thousand years, and although old writings in medicine clearly describe the symptoms and clinical course of tuberculosis, the cause, the tubercle bacillus, was identified by Koch only as recently as 1882, and effective therapy has been available only since the 1940s. So the existence of a known causal mechanism is hardly a prerequisite for calling something a disease. More generally, it is likely that exceptions to any definition of disease will be common.

Campbell et al., in a classic article, "The Concept of Disease," reported presenting clinicians and lay people with a series of medical conditions and asking them whether or not each was a disease.12 Perhaps not surprisingly, doctors were more prone than lay people to call things such as lead poisoning and tennis elbow diseases. But there was otherwise quite good concordance. Infectious diseases—malaria, tuberculosis, syphilis, polio—topped the list. Other common or serious medical problems—lung cancer, diabetes, multiple sclerosis, cirrhosis—came next. At the bottom were things such as hangover, senility, heatstroke, tennis elbow, and drowning, which have English, not Latin, labels.
These authors concluded that the features that best predicted the labeling of a condition as a disease were that the condition (1) was associated with an abnormality of structure or function (i.e., it had a "cause") and (2) was likely to be treated by a doctor. The latter was the stronger determinant, but regrettably, it is also tautological: since doctors are in the business of dealing with disease, describing a disease as something that doctors deal with does not, in my view, advance our understanding much.

Let us consider the first predictor for a moment. One simplistic but functional view is that if a condition simply represents a cluster of signs and symptoms (for example, carpal tunnel syndrome, low back pain), it is less disease-like. Presumably this reflects a concern that the associations among the condition's features may be an illusory correlation (of the kind humans are particularly adept at perceiving)13 and not "real." There is good reason for such skepticism. Historically, many syndromes that existed 100 years ago, such as self-pollution, have since disappeared, and there is every indication that many contemporary syndromes, such as chronic fatigue, sick-building syndrome, Gulf War syndrome, and the myriad health problems attributed to breast implants, may go the same way. Conversely, the ability to explain a disease through some underlying mechanism lends authenticity to it. Angina becomes much more believable if we can find narrowing of the lumen of the coronary artery on angiography, even though the association with the clinical manifestations is weak.

The Role of Basic Science

If we view the identification of the features of a disease as analogous to the findings of an experiment (in this case, an experiment conducted by a malicious deity), then one basis for distinguishing a disease from a non-disease is the extent to which the features can be explained by a scientific theory. Thus the infectious diseases are explained by a noncontroversial, and historically verified, theory of host and parasite. Chronic diseases such as atherosclerosis are a bit less disease-like, since the theory underlying them is less secure. And as we move to syndromes such as chronic fatigue syndrome, we are less inclined to view them as diseases, because no satisfactory scientific mechanism has yet been found to explain their features.

Turning to clinical reasoning, investigators such as Schmidt14 and Patel,15 in studying the role of basic science in clinical reasoning, have found repeatedly that clinicians rarely invoke mechanistic explanations. But as Schmidt has shown, the fact that they need not invoke mechanisms does not mean that they do not know them; the knowledge is available but is only rarely used. As he describes it, the knowledge is "encapsulated." While basic science may play only a minimal role in day-to-day practice, it is arguably the only, or at least the major, route to understanding in this domain. Of course, basic science need not be restricted to biology. In the same way, the basic science of epidemiology was fundamental to understanding the transmission of AIDS, just as Snow in the 1850s understood the mechanism of cholera transmission (the London water supply) long before the bacillus was isolated.

I believe we can now posit an explanation for the paradoxical findings of Schmidt and Patel. In the normal course of events, clinicians making diagnoses deal at the syndrome level, where the nature of the causal mechanism is irrelevant.
The history and physical examination are directed at revealing the syndrome-like manifestations, which then point to tests directed at the underlying processes, and to therapy. The textbooks of clinical diagnosis for "old" diseases probably have not changed much since Osler's time. The signs and symptoms are pretty well what they have always been, although of course some historic scourges—smallpox, diphtheria, cholera—are now nearly unheard of in the West, and others, such as AIDS, have taken their place. But despite the changes in our understanding of disease, the clinician attempting to make a diagnosis is dealing almost exclusively at the syndrome level. Occasionally, some understanding of underlying processes may help to sort out a conundrum, but one suspects that clinicians appear rarely to use basic science simply because their history taking and physical examinations are directed at labeling the syndrome. Clinical reasoning reverts to a historically earlier form of the disease, following the biologic dictum that ontogeny recapitulates phylogeny—that the fetus passes through all the stages of evolution before birth.

Campbell13 elaborated the notion of disease in philosophical terms, describing two basic positions: the "nominalist" perspective and the "essentialist" perspective. In the nominalist view, a disease is simply a collection of abnormalities that appear to arise together. Thus the historical diseases of dropsy, consumption, and plague were recognized long before any causal agent was detected, although etiologies (such as "bad humors") were advanced. Conversely, the essentialist perspective presumes that the signs and symptoms arise from pathologic processes that can be identified and, it is hoped, rectified. While it is tempting to place these two views in a historical order, the contemporary examples we have discussed indicate that the two perspectives represent extremes on a continuum, which, as we shall see, has parallels in both philosophy and psychology.

What is a Concept? Lessons from Philosophy

We can make some general observations about the concept of disease. First, a disease, like any concept, does not exist entirely "out there" but rather, to some degree, is a mental construct. Second, the category or concept called "disease" is not an all-or-none proposition; rather, particular exemplars have different degrees of disease-ness. Finally, it is awfully difficult to devise an explicit rule to aid in distinguishing between diseases and non-diseases. A rule such as "diseases are what doctors deal with" works quite well but is singularly uninformative. And we sense, without proof, that any rule we may devise is not going to be coldly analytic, but must have sub-rules such as "the more Latinesque it is, the more disease-like it is." So, ironically, while it is relatively easy to devise rules to determine whether someone has a particular disease (although I will go on to show that the rules are not the whole story), it is a lot harder to devise rules for the overarching category called "disease."

These issues are not at all specific to disease, but rather are part of a large body of knowledge extending in space across at least three disciplines—philosophy, psychology, and neuroscience—and in time as far back as Plato. To explore this further, I now venture (with considerable trepidation) into a more general inquiry into the nature of concepts, beginning with some philosophical views.
The origin of concepts has been, in some sense, a nature-nurture debate.9 However, this argument has focused not on whether human traits are inherited or learned (the usual spin on nature versus nurture), but rather on whether categories or concepts such as beauty, disease, table, or tree exist "out there" to be learned by individuals as they develop and mature (which would suggest that an individual's knowledge is formed from experience [nurture]) or are essentially a product of the mind (we impose order and category boundaries where none exists, as a result of the biological structure of the mind [nature]). A casual reading of any philosophy textbook reveals that this issue has been a central concern through the ages of the great minds—Plato, Aristotle, Descartes, Hume, Kant, etc. Let us briefly review the historical debate in mainstream philosophy, with a view to showing how thinking in philosophy can help to frame our perspective on clinical reasoning.

Modern philosophy began with Descartes, who emerges as the ultimate skeptic, and whose views have retained central status as the universal straw man for all his successors. His famous statement "cogito, ergo sum" (I think, therefore I am) has been a lodestone for philosophers and t-shirt makers for three centuries. Regrettably, this idea has been almost universally misunderstood. Most interpret it as a statement of the ultimate rational man; our humanity is defined in terms of our capacity for rational thought. Unfortunately, the statement had a much more humble meaning for Descartes. In continuing to question whether one could justify any external reality, to devise any conclusive argument for the existence of objects such as dogs and tables, Descartes was led to the desperate conclusion that the only thing he could be really sure of was his own thoughts. I think, therefore I am.

The antithesis of this position was championed by the English empiricists Locke and Hume. Their view was that the mind was a tabula rasa, a clean slate on which one's experience with the world was written. This interpretation seems perfectly acceptable for sensory experience, but is more difficult to sustain for higher concepts such as causation, temporality, or, for that matter, disease. Hume's resolution was to suggest that these notions emerged as a result of experience.

Kant reframed the issue in a way that is central to our subsequent journey through psychology and neuroscience. He recognized that thoughts can occur only as products of interactions between the mind and the external reality of experience; we construct experience. He maintained a rigid boundary between those properties that our minds bring to experience (which are hardwired) and those that emerge from experience. He eventually created a list of 12 "primitives"—object, causation, temporality, and nine others—that he claimed the mind imposed on the world of experience.

Hegel went one step further and recognized that the external world can influence the categories and labels we apply. The categories themselves do not emerge from our minds, but are influenced by the objects of our perceptions. The mind is not simply a clean slate upon which all experience is written in coherent form (Hume); nor is it the case that there is no uniform order in the outside world and that all concepts are mental inventions (Descartes); nor, finally, does the mind impose fixed structure or constructs on sensory experience (Kant).
Instead, the concepts and the content both grow and evolve ("become") as a consequence of the interaction between the individual and the environment.

Finally, in this century, Wittgenstein extended these ideas further. He proposed that not only are concepts not fixed, they also are not definable by any set of logical rules. In pondering even commonplace concepts such as "dog," he realized that any attempt to devise rules is doomed. A dog has four legs—but if one is amputated, it's still a dog. A dog barks—except the African Basenji. A concept—whether an abstract concept such as truth or a mundane concept such as dog, fork, or tree—emerges as a matter of "family resemblance." Robins are more bird-like than penguins; malaria is more disease-like than alcoholism. Wittgenstein proposed that concepts or categories are derived from family resemblances, not from fixed sets of defining attributes.

Thus the philosophy of concepts evolved from a Cartesian view, which is entirely intra-psychic and questions any external reality, and an empiricist perspective, which presumes that all order and concepts exist as natural categories to be discovered by the human observer, to a Kantian interaction, in which the mind provides the categories or concepts and the external reality provides the objects to fill the categories, and then to a Hegelian perspective, which is much more organic, and in which thoughts and concepts themselves evolve and change as a result of interactions with external reality. Ultimately, we reach the perspective of Wittgenstein, which places even fewer constraints on concepts: they are a matter of family resemblance and thus can be elaborated only through extensive experience with the world's families.

Applying these notions to clinical reasoning, philosophy provides a larger framework in which to view our dilemma in defining a disease. To the extent that a disease is a concept, philosophy buttresses the middle ground between the notion that diseases exist entirely "out there," only to be discovered and learned, and the notion that they are simply mental constructs. We can then think of the concept of disease as arising from an interaction between the thoughts of the perceiver and regular aspects and associations of the environment. Further, some diseases, such as syphilis, are more central members of the family; others, including the syndromes, are more peripheral. As we shall see, this formulation finds remarkable support in research in both psychology and neuroscience, to which I now turn.

What is a Concept? Lessons from Psychology

One branch of psychology has been preoccupied with the same issue as the philosophers: how do people learn concepts such as table, dog, or truth? But instead of relying entirely on reason for understanding, psychology seeks evidence to understand how people create and learn concepts. Perhaps in the course of doing so, psychologists deliberately skirt some of the tough epistemologic issues that preoccupy philosophers. On the other hand, in my own reading, I was struck by how the one informs the other.

A simple example: The Müller-Lyer illusion,16 shown in Figure 1, is pretty well known to all. We see one vertical element as being longer than the other. Even though we can measure them and show them to be the same, the illusion is inescapable—a fine example of how we impose order (sometimes biased order) on the external world. But psychologists have gone further with this illusion, and questioned precisely why it is an illusion.
In the course of doing so, they provide a nice illustration of Hegel's interactive model of mind. One hypothesis is that it is an illusion because our minds see the figure in three dimensions, so that the symbol on the left is seen as the outside corner of a wall nearest the viewer, and the one on the right as the inside corner of a wall farthest from the viewer. Although the two vertical lines are objectively the same size, since the one on the left is seen to be nearer than the one on the right, the right one must be "actually" longer. Deregowski17 tested the illusion in Zulus, who spend their lives in round houses, and found that they did not see it as an illusion. So it is not an illusion because our brains are "hardwired" to see it as such (unless Zulus have different hardwiring); it is an illusion because of the particular experiences we have had with the world. On the other hand, the illusion reminds us that our perceptions do not necessarily mirror reality, as they are also shaped by internal assumptions (in this case, about perspective and the inference of a third dimension from the two-dimensional representation on the retina) that sometimes lead us astray.

Figure 1: The Müller-Lyer optical illusion.

A second example from psychology leads us closer to our central concern with clinical reasoning. Most of us have, at one time or another, wondered whether the "red" we see is the same as the red seen by the person beside us. While the differences in perception are rarely likely to be as extreme as in the case of a childhood friend of mine, whose color blindness was detected when he went to school and repeatedly drew green reindeer at Christmas, we have no real way of ever verifying the universality of "red." Is it just a linguistic device, or a cultural norm? After all, at some time we all had to learn, from our parents or friends, what red was. Perhaps it differs in different cultures. These questions, as they begin to cross the boundary between philosophy, psychology, and learning, are of more than passing interest.

Much of the fundamental work in concept formation has been done by Eleanor Rosch.18 One area she studied was how colors are identified in different cultures. While there appear to be small cultural differences in the boundaries between colors (the Navaho, for example, have only one word for blue and green; no wonder, with all that turquoise jewelry around),10 Rosch showed that all cultures were unanimous in their choices of the best examples of red, yellow, or green. Even more interesting, Rosch studied a tribe, the Dani, who had words only for "bright" and "black." She then taught them words for colors, using Dani words (e.g., the word for tree) that were unrelated to color. One group learned words for "primary" colors such as fire-engine red; the other learned Dani words for intermediate colors such as turquoise. The group learning red, yellow, and blue learned the associated words rapidly and effectively; the other group never did master the associations. Studies of this type provide support for the contemporary notion in philosophy that categories and concepts derive from our experience of the world; indeed, there is surprising uniformity to these concepts in precisely those areas where we might expect that experience (such as the experience of color) is also universal.

Prototype theory was perhaps the first theory of concepts to be seriously applied to clinical reasoning.
Bordage and Zacks19 used many of the methods of Rosch to demonstrate that the same kind of graded structure that distinguishes natural categories is present in disease categories. They found, for example, that diabetes was a much more prototypical endocrine disease than Hashimoto's disease or hyperthyroidism: it was volunteered more often by practitioners asked to name an endocrine disease, recognized more accurately and quickly, and so on. These studies lead to two conclusions. First, there is evidence to substantiate our musings at the beginning of this talk that the concept of disease is a continuum, not a category. Second, the identification of conceptual prototypes such as diabetes, carrot, and robin, which transcend different cultures, argues for an external "nurture" basis for concepts—even high-level concepts such as disease.

Prototype theory, in its methods, seeks evidence for cultural or even transcultural norms for categories. In the extreme, prototype theory might be viewed as empirical evidence for the position that concepts and categories are derived entirely from universals in the environment, a position more extremely nurture-oriented than any we have considered except those of Locke and Hume.

Another psychological theory of concept formation, exemplar theory, while still holding to the implicit view that the concepts we learn reflect an external reality, is much more modest about the universality of such concepts. In this perspective, we are able to identify a member of a class or a concept not because of any internal rules, nor because the sum of our experience has created prototypes of the class that are available for analysis and introspection, but because we have stored, for any category (dogs, chairs, diseases, sports cars), innumerable instances of that category (my dog, Rover, Lassie, etc.). When we are faced with a categorization task, the first line of defense is a search through memory for similar examples of the class; if we find an example that is sufficiently similar, we assume the new beast is also a dog. This description makes the process sound far more deliberate and available to introspection than the evidence suggests. Instead, if we inquire why a person decided that the new beast was a golden retriever, the new car was an Audi, or the skin lesion was actinic keratosis, the modal response would likely be an appeal to overall resemblance ("it just looked like one") rather than to any explicit rule.
