Article Open access Peer-reviewed

Back to the Future

2005; Wolters Kluwer; Volume: 79; Issue: 9; Language: English

10.1097/00007890-200505150-00007

ISSN

1534-6080

Authors

Thomas E. Starzl

Topic(s)

Hematopoietic Stem Cell Transplantation

Abstract

Between the ICTS Congress in Miami and this one in Vienna, we have passed the half-century mark of what has been considered the modern era of clinical transplantation. All too often, the starting point of an era is arbitrary and decided upon by someone who believes the dawn to be the moment of his or her arrival. With transplantation, however, there seems to be little argument that the beginning should be dated to 1953. The following 51 years can be divided into four distinct phases. The protracted birth of clinical transplantation took place during the 15 years of Phase 1.

Phase 1

The birth began between 1953 and 1956 with the demonstration that neonatal mice (1) and irradiated adult mice (2) developed donor-specific tolerance after successful alloengraftment of splenic and bone marrow cells. Because a good histocompatibility match was required for avoidance of graft-versus-host disease (GVHD), clinical application of hematolymphopoietic cell transplantation had to await discovery of the HLA antigens. When this was accomplished (3), the successfully treated human bone marrow recipients of 1968 were oversized versions of the tolerant chimeric mice. The clinical bone marrow transplant breakthrough of 1968 (4–6) signaled the end of Phase 1.

In the meantime, all of the major struts of clinical organ transplantation had been put in place: immunosuppression, preservation, tissue matching, and the complex surgical technology. In fact, kidney transplantation, which was first accomplished in humans nearly a decade before clinical bone marrow transplantation (7,8), was already an established clinical service by 1968, albeit a flawed one. In addition, the first long survivals had been recorded after liver (9) and heart (10) transplantation. All of this had been accomplished in the ostensible absence of leukocyte chimerism, without HLA matching, and with no evidence of graft-versus-host disease.

Two unexplained features of the alloimmune response had made it feasible to forge ahead precociously with organ transplantation. The first was the demonstration that organ rejection is highly reversible. The second was that an organ allograft, if protected by nonspecific immunosuppression, can induce variable donor-specific tolerance. The tolerogenic quality of an organ allograft was observed for the first time in any species in 1959, in two fraternal twin kidney recipients, the first in Boston (7) and the second in Paris (8). The patients had been conditioned with sublethal total body irradiation prior to transplantation. Both renal allografts functioned for more than two decades without a need for maintenance drug therapy, which was, in fact, not yet available.

A similar drug-free state was next occasionally observed after kidney transplantation, and more frequently after liver replacement, in mongrel dogs that were treated with a single immunosuppressive agent: 6-mercaptopurine (6-MP), azathioprine, prednisone, or ALG. After treatment was stopped, rejection did not develop for long periods in some animals. Such results were at first exceedingly rare—no more than 1 or 2% of the canine kidney experiments done under 6-MP and azathioprine up to the end of 1962. However, it was suspected by this time that rejection, its reversal, and immunosuppression-assisted organ engraftment represented a form of partial tolerance.
This view was crystallized by the human experience summarized in the title of a 1963 report of a series of live donor kidney recipients treated in Denver: “The reversal of rejection in human renal homografts with subsequent development of homograft tolerance” (11). The patients had been treated with azathioprine, with large doses of prednisone added to treat rejections that were monitored by serial testing of serum creatinine (Fig. 1A). Although rejection occurred in almost every case, it was usually reversible. The one-year survival of 46 allografts from familial donors in 1962–63 was an unprecedented 75%. While most of the 25% loss was due to uncontrolled acute rejection, the development of partial tolerance in many of the survivors was inferred from the rapidly declining need for treatment after rejection reversal. In fact, nine (19%) of the 46 familial allografts transplanted in Denver during this period functioned for the next four decades. Moreover, all immunosuppression eventually was stopped in seven of these nine patients without rejection, for periods ranging from 6 to 40 years. Eight of the nine patients are still alive and bear the longest surviving organ allografts in the world (12).

FIGURE 1. The role of immunosuppression in deletional tolerance. Exhaustion and deletion of the antigraft response occurs without any treatment in spontaneous tolerance models (usually liver transplantation). (A) In normally rejecting models, the recipient response may be kept in the deletable range with just the right daily doses of minimal immunosuppression (gray bar), permitting the development of variable tolerance. Because accurate prediction of the “right dose” in the outbred human population is not possible, dose maneuverability is required. (B) Prophylactic posttransplant overimmunosuppression with multiple drugs (layered bars), with variable subversion of clonal activation ➜ exhaustion ➜ deletion. (C) Reduction of the anticipated antidonor response into a deletable range by lowering global immune reactivity before exposure to alloantigen with pretransplant cytoablation or cytoreduction. (D) Combined use of the therapeutic principles depicted in (A) and (C).

What was the connection between the tolerant mice, the irradiated fraternal twin kidney recipients in Boston and Paris, the canine organ recipients in whom treatment could be stopped, and the unique cluster of ultimately drug-free human kidney recipients in Colorado? The mystery deepened with the demonstration in 1966 in France (13) and England (14) that the liver can be transplanted in about 20% of outbred pigs without any treatment at all. None of the organ recipients, whether off or on maintenance immunosuppression, were thought to have donor leukocyte chimerism. Thus, organ transplantation became disconnected at a very early time from the scientific anchor of leukocyte chimerism that had been established by the mouse models and was soon to be exemplified by human bone marrow transplantation. The resulting intellectual separation of the two kinds of transplantation has been an unchallenged legacy of Phase 1, passed on from generation to generation ever since.

There was another dark legacy of Phase 1: a modified version of the treatment strategy developed with azathioprine and prednisone. The principal changes from the original protocol are shown in Figure 1B.
Instead of supplementing baseline immunosuppression only when needed, as shown in Figure 1A, large prophylactic doses of prednisone were administered from the time of transplantation. This was an instinctive reaction to the loss of grafts to rejections that could not be reversed. With this change, the incidence of acute rejection was greatly reduced. More than 35 years passed before the long-term immunologic consequences of the modifications were realized.

Thus, by 1968 the foundation, as well as the essential superstructure, of clinical transplantation as we know it today had been established. Not surprisingly, all 19 recipients to date of the Medawar Prize served as midwives during the 15-year birth period of 1953–1968. The role of 12 of the 19 laureates in the ascension of tissue matching-dependent bone marrow transplantation from mouse to man (Table 1, left) is easy to identify. By going beyond the boundaries established by the mouse models, the seven contributors to organ transplantation (Table 1, right) had wandered into a new conceptual universe.

TABLE 1: Medawar Laureates

Phase 2

Throughout the succeeding Phase 2, which began in 1969, immunosuppression for organ transplantation was based on azathioprine and, in most centers, prophylactic high doses of prednisone or posttransplant antilymphocyte globulin (ALG), which had been introduced clinically in 1966. It was a bleak period. In the view of critics, the heavy mortality, and particularly the devastating morbidity caused by steroid dependence, made organ transplantation (even of kidneys) as much a disease as a treatment. Most of the liver and heart transplant programs that had been established in the late 1960s, in an initial burst of optimism, closed down. But in the few that remained, a trickle of long-surviving liver and heart recipients bore witness to what some day would be accomplished on a grand scale. For example, a woman now in her 36th posttransplant year was 4 years old at the time of liver replacement in Denver for biliary atresia and a hepatoma. She is the longest surviving recipient in the world of an extrarenal organ.

Phase 3

In fact, what had appeared to be the sunset of extrarenal transplantation was only the dawn of Phase 3, which began with the clinical introduction of cyclosporine (15), followed a decade later by tacrolimus (16). These drugs were associated with stepwise improvements in all kinds of organ transplantation, but their impact was most conclusively demonstrated with liver and heart transplantation. As new agents became available, they were simply folded into the modified formula of heavy prophylactic immunosuppression that had been inherited from Phases 1 and 2 (Fig. 1B). Used in this way, the better drugs fueled the golden age of transplantation of the 1980s and early 1990s. Acute rejection had become almost a nonproblem. However, the unresolved issues now were chronic rejection, the risks of long-term immunodepression per se, and drug-specific side effects. It was clear that amelioration of these problems would require, as a first step, elucidation of the mechanisms of alloengraftment and of acquired tolerance.

Phase 4

An intensified search for these immunologic mechanisms became the theme of our current Phase 4, which began in earnest about a dozen years ago. Until this time, organ engraftment had been attributed to mechanisms that did not involve either the presence or a role of donor leukocyte chimerism.
It was known that organs contain large numbers of passenger leukocytes, and that these donor cells were largely replaced in the successfully transplanted allograft by recipient leukocytes (Fig. 2A). However, the missing donor cells were thought to have undergone immune destruction, with selective sparing of the specialized parenchymal cells. Conversely, the ideal result after bone marrow transplantation was generally perceived as complete replacement of recipient immune cells (i.e., total hematolymphopoietic chimerism; Fig. 2B).

FIGURE 2. Old (A and B) and new (C and D) views of transplantation recipients. (A) The early conceptualization of immune mechanisms in organ transplantation in terms of a unidirectional host-versus-graft (HVG) response. Although this readily explained organ rejection, it limited possible explanations of organ engraftment. (B) Mirror image of (A), depicting the historical view of successful bone marrow transplantation as a complete replacement of the recipient immune system by that of the donor, with the potential complication of an unopposed lethal unidirectional graft-versus-host (GVH) response (i.e., rejection of the recipient by the graft). (C) Our current view of bidirectional and reciprocally modulating immune responses of coexisting immune-competent cell populations. Because of variable reciprocal induction of deletional tolerance, organ engraftment was feasible despite a usually dominant HVG reaction. The bone silhouette in the graft represents passenger leukocytes of bone marrow origin. (D) Mirror image of (C) after successful bone marrow transplantation. Recipient cytoablation has caused a reversal of the size proportions of the donor and recipient populations of immune cells. Reprinted with permission from (31).

A flaw in this historical dogma began to be exposed in the early 1990s with the puzzling observation in Seattle that there was always a small residual population of recipient hematolymphopoietic cells in patients previously thought to have complete bone marrow replacement (Fig. 2D) (17). When a similar small population of donor leukocytes (i.e., microchimerism) was discovered with sensitive detection techniques in 1992 in long-surviving human recipients of functioning organ allografts (Fig. 2C), it was evident that organ engraftment and bone marrow cell engraftment were mirror-image versions of leukocyte chimerism (18–20). The microchimerism was demonstrated in the blood, lymph nodes, skin, or other tissues of all 30 liver or kidney recipients studied up to three decades after transplantation. The donor hematolymphopoietic cells were of different lineages, including dendritic cells. The peripheralized donor leukocytes obviously were progeny of migratory donor precursor or pluripotent hematolymphopoietic stem cells that are a normal constituent of whole organs.

From the biopsy findings and from voluminous supporting data, we concluded that organ engraftment had resulted from “…responses of co-existing donor and recipient cells, each to the other, causing reciprocal clonal exhaustion, followed by peripheral clonal deletion” (18,19). The host response was the dominant one in most cases of organ transplantation, but with the occasional exception of GVHD (Fig. 3). In the conventionally treated bone marrow recipient, host cytoablation simply transferred immune dominance from the host to the graft, explaining the high risk of GVHD.
All of the major differences between the two kinds of transplantation were caused by the recipient cytoablation (18–20).

FIGURE 3. Contemporaneous host-versus-graft (HVG) (upright curves) and graft-versus-host (GVH) (inverted curves) responses following organ transplantation. If some degree of reciprocal clonal exhaustion is not induced and maintained (usually requiring protective immunosuppression), one cell population will destroy the other. In contrast to the usually dominant HVG reaction of organ transplantation (shown here), the GVH reaction usually is dominant in the cytoablated bone marrow recipient. Therapeutic failure with either type of transplantation implies the inability to control one, the other, or both of the responses. Reprinted with permission from (24).

Although this explanation of alloengraftment was congruent with essentially all previously enigmatic observations in experimental and clinical models of transplantation, it was at first highly controversial. The criticisms were dampened in the mid-1990s with the demonstration by Zinkernagel that the different carrier states caused by spreading noncytopathic microorganisms represented various levels of similar deletional tolerance (21–23). After agreeing that the kinetics and mechanisms of infection tolerance were essentially the same as those of alloengraftment, Zinkernagel and I undertook a review in which the analogies between the numerous clinical scenarios of transplantation and those of infectious diseases were described (24). We also proposed that the migration and localization of antigen are the principal factors governing immunologic responsiveness or unresponsiveness, no matter what the antigen. One key tenet of this immunoregulatory paradigm is that antigen that fails to reach lymphoid destinations is not recognized (immune ignorance). The other is that clonal exhaustion-deletion is the seminal mechanism of acquired tolerance. The existence and importance of immune ignorance (25,26) and of clonal exhaustion-deletion (27,28) were formally proved in the 1990s.

After organ transplantation, the prompt recognition of alloantigen is assured when the passenger leukocytes of the graft simulate the hematogenous spread of noncytopathic microorganisms (e.g., the hepatitis viruses) and migrate preferentially to host lymphoid organs (Fig. 4, left). There they induce a cytolytic T cell response before disseminating more ubiquitously (Fig. 4, right). Cells that reach protected nonlymphoid niches may subsequently migrate back to host lymphoid organs and maintain the deletional state induced at the outset. Alternatively, these donor cells may perpetuate alloimmunity in the same way that residual microorganisms sustain protective immunity: that is, below some threshold, microchimerism may be responsible for the high PRA or other evidence of sensitization that frequently develops after unsuccessful transplantation.

FIGURE 4. Initial preferential migration of passenger leukocytes from organ allografts to host lymphoid organs (left), where they induce a donor-specific immune response. After about 30 days, many of the surviving cells move on to nonlymphoid sites (right). Migration from these privileged locations back into the lymphoid compartment may perpetuate the exhaustion-deletion induced at the outset or, alternatively, initiate (or maintain) alloimmunity. Reprinted with permission from (32).

How could this insight be exploited clinically? This was considered in a second review (29).
The window of opportunity for the clonal deletion that results in the collapse of the immune response shown in Figure 1A and Figure 3 is open only for the first few posttransplant weeks of maximal donor leukocyte migration. It was apparent that the window could be closed by excessive postoperative immunosuppression (Fig. 1B). With later reduction of the initial overimmunosuppression, recovery of the inefficiently deleted clone would lead to the delayed acute rejection, or the chronic rejection, that was being seen in the transplant clinics (Fig. 1B). Even in the best-case scenario, the patients would be predestined to lifetime dependence on immunosuppression. However, too little immunosuppression would result in uncontrolled rejection.

In 2001, it was suggested that this dilemma could be addressed by adherence to the two historically rooted therapeutic principles shown in Figure 1: recipient pretreatment and minimalistic posttransplant immunosuppression (29). Minimal immunosuppression alone (Fig. 1A) can allow the development of tolerance, but it is difficult to use this approach in the heterogeneous outbred human population. With pretreatment, the recipient’s global immune responsiveness is reduced before exposure to donor antigen, thereby lowering the anticipated donor-specific response into a more readily deletable range (Fig. 1C). This apparently is what had been accomplished with sublethal irradiation alone in the groundbreaking fraternal twin cases of 1959 (7,8). In fact, pretreatment by recipient cytoablation became the essential therapeutic step in conventional bone marrow transplantation, but with the penalty of graft-versus-host disease even with an HLA-matched donor. Irradiation and other cytoablation methods were too dangerous, and too restrictive because of the prerequisite HLA matching, to be used for organ transplantation. However, less drastic lymphoid depletion by ALG or other well-known measures (the so-called nonmyeloablative approach) has been repeatedly shown to be effective without causing GVHD. Consequently, we suggested that pretreatment with one of today’s potent antilymphoid antibody preparations, combined with just the right amount of posttransplant immunosuppression, would allow the preemptively weakened clonal activation to proceed efficiently to clonal deletion (Fig. 1D). The ultimate objective was to reduce long-term dependence on maintenance therapy.

This was not a new idea. Precisely this strategy was extensively tested in the late 1960s and was one of the principal topics of a CIBA Foundation symposium in January 1967 (30), which was attended by several members of today’s audience. However, the strategy could not be efficiently applied with a baseline agent as weak as azathioprine. Moreover, it had been developed empirically, without an understanding of the mechanisms of alloengraftment. With elucidation of engraftment mechanisms, and armed with today’s better drugs, definitive studies of variations of the lymphoid depletion strategy are ongoing at several centers. Patient and graft survival are the parameters of greatest immediate interest. However, these trials constitute formal tests of immunologic hypotheses and thus should help bring to closure long-standing disputes about the biologic meaning and mechanisms of acquired tolerance.
Although our view is that clonal exhaustion-deletion and immune ignorance are the seminal mechanisms of allotolerance, and that both are regulated by the migration and localization of leukocytes (24,29), other proposed primary or accessory mechanisms, singly or together, may also play a role. Intense efforts are currently being made to determine if, or to what extent, they can be used to design or guide clinical care. Such research is well represented on the congress program. Thus, this 20th Congress of our Society may well be remembered as the beginning of the end of Phase 4. If so, it is fitting that the page should be turned to a new chapter in Vienna, where 102 years ago Emerich Ullmann reported to the Vienna Medical Society the transplantation of kidney allografts into the necks of dogs. These first attempts in history to transplant an organ in any species were celebrated in the centennial symposium organized here in 2002 by our host, Raimund Margreiter.

What lies ahead in Phase 5? My prediction is that completely drug-free tolerance will be largely, but not exclusively, limited to recipients of HLA-matched organs. But variable partial tolerance is there for the taking in most of the others, allowing reduced exposure to the risks of chronic immunosuppression. Xenotransplantation will have to be developed within the same immunologic framework. Here, the problem, in principle, is to create a better interspecies tissue match by transgenic modification. Although the α-Gal gene has been knocked out in pigs, it is not yet known what further changes must be made. Where stem cell biology will fit remains unknown, but it too will have to conform to the same immunologic rules.

One thing seems to me certain. Our forefather and founding president, Peter Medawar, would be moved indescribably if he could see the extent to which his chimerism discoveries with Billingham and Brent (1) have been the glue seamlessly uniting not only all of experimental and clinical transplantation, but also linking transplantation to other fields of experimental and applied immunology. He also would smile if he could see that progress is now being made not so much by developing better drugs as by the better use of drugs we already have in hand.
