Clinical Trials in Cardiovascular Medicine
Circulation. 2001;103(21):e101–e104
Lippincott Williams & Wilkins
DOI: 10.1161/01.cir.103.21.e101
ISSN 1524-4539
Elliott M. Antman

From the Cardiovascular Division, Brigham and Women's Hospital, Boston, Mass. Originally published 29 May 2001.

The practice of cardiology is increasingly driven by evidence-based medicine centered on the results of clinical trials.1 In this inaugural article of Clinical Cardiology: Physician Update, I will focus on the design, analysis, and interpretation of clinical trials in cardiovascular medicine. Armed with the tools described herein, clinicians will be better equipped to understand and synthesize the results of the multitude of clinical trials appearing in the cardiology literature. A major goal of this effort is to shorten the delay between the publication of trial results and the translation of their findings into clinical practice.

Design of Clinical Trials

Trials of new therapies in cardiology typically compare the new treatment to a control. The control group receives the treatment against which the test intervention is being compared. Control and test treatments must be both medically justifiable and compatible with the healthcare needs of study patients. Both treatments must be acceptable to study patients and to the physicians administering them; there must be reasonable doubt regarding the efficacy of the test treatment; and there should be reason to believe that the benefits will outweigh the risks of treatment.
When the control treatment is a placebo, the trial is referred to as a placebo-controlled trial.2 Given the burgeoning supply of new treatments in the cardiovascular armamentarium, more and more trials compare the test therapy to a standard therapy; this is referred to as an active-controlled trial.3,4

Randomized controlled trials typically involve the randomization of patients to either the control or test treatment (ie, randomized concurrent control). These trials are the gold standard for evaluating new therapies, and they form the foundation for the highest level of recommendations in practice guideline documents that stress evidence-based medicine5 (Figure 1). Randomization has 3 important influences that explain why it is considered the standard for trial design: (1) it reduces the likelihood of patient selection bias that may occur either consciously or unconsciously; (2) it enhances the likelihood that comparable groups of subjects are compared, especially if the sample size is sufficiently large; and (3) it validates the use of common statistical tests such as the χ2 test for comparison of proportions and Student's t test for comparison of means.6

A stratified randomization scheme is typically used when it is considered important to achieve a balance of key baseline characteristics (eg, location of infarction) among the treatment groups. The ratio of randomization to the control and test therapies may be equal or unbalanced; an unbalanced ratio allows investigators to acquire more information about the test therapy (eg, risk of intracranial hemorrhage with a new fibrinolytic) while still maintaining a comparison with a control group.
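The stratified, unbalanced allocation just described can be sketched in a few lines (an illustrative toy, not a validated randomization system; the stratum names and the 2:1 ratio are hypothetical):

```python
import random

def make_randomizer(ratio_test=2, ratio_control=1, seed=None):
    """Return a function that assigns 'test' or 'control' with the
    given allocation ratio (here 2:1 in favor of the test therapy)."""
    rng = random.Random(seed)
    pool = ["test"] * ratio_test + ["control"] * ratio_control
    return lambda: rng.choice(pool)

# One independent randomizer per stratum (eg, infarct location), so
# balance of this key baseline characteristic is maintained.
strata = ["anterior MI", "inferior MI"]
assign = {s: make_randomizer(seed=i) for i, s in enumerate(strata)}

patient_stratum = "anterior MI"   # hypothetical enrollee
arm = assign[patient_stratum]()   # 'test' or 'control'
```

Over many enrollments, roughly two thirds of the patients in each stratum receive the test therapy, acquiring extra exposure data while preserving a concurrent control group.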
Because many clinical trials in cardiology are multicenter in nature, an attempt is made to achieve balance of treatment assignments at each enrolling center by constraining the randomization so that the desired ratio occurs in blocks of patients (eg, every 6 or 8 patients) at each center.

When the investigator selects the subjects to be allocated to the control or treatment groups, the study is referred to as a nonrandomized, concurrent control trial.7 Unlike randomized controlled trials, such trials are subject to bias because it may be difficult for investigators to match the test and control groups adequately. Clinical trials using historical controls compare a test intervention to data obtained earlier in a nonconcurrent, nonrandomized control group.7 The use of historical controls allows clinicians to offer potentially beneficial therapies to all subjects, thereby reducing the sample size for the study. The major drawbacks are bias in the selection of the control population and the potential failure of historical controls to reflect contemporary diagnostic criteria and current treatment regimens for the disease under study.

Other trial designs useful for evaluating cardiovascular treatments are the crossover design and the withdrawal study. The crossover design is actually a special case of the randomized controlled trial in which each subject serves as his or her own control.7 Comparisons are made between the treatment response seen during the first treatment to which the patient is allocated and that seen during subsequent treatments. In withdrawal studies, patients with a chronic cardiovascular condition are taken off therapy, and a comparison is made between the treatment effect observed on therapy and that observed off therapy.
Potential limitations of withdrawal studies are that only patients who have tolerated the test intervention for a period of time are eligible for enrollment and that changes in the natural history of the disease may influence the response to withdrawal of therapy.

Given the multitude of drugs administered simultaneously to patients with cardiovascular diseases, an increasingly important area of investigation is drug interactions. Trials that test ≥2 therapies simultaneously typically use a factorial design. Such trials are most appropriate when there is thought to be no interaction between the test treatments. If such interactions are known or discovered to exist, it is important to evaluate each test treatment against a control treatment.

To minimize the possibility of bias, blinding (sometimes referred to as masking) of treatment assignment is used. When only the patient is unaware of the treatment assignment, the trial is single blind. If the investigator is also blinded, the trial is double blind.

The responsibility for monitoring evolving efficacy and safety on an interim basis during the conduct of the trial rests with a Data Safety Monitoring Board or Committee.8 It is usual practice to prespecify the level of statistical evidence at an interim analysis that would lead the Board to recommend premature discontinuation of the trial because of overwhelming evidence of benefit or harm from the test therapy (ie, stopping boundaries). Investigators and sponsors sometimes also prespecify a calculation of the conditional probability of the trial achieving its objective(s) given the observations at a particular interim look (eg, when half the expected number of patients have been enrolled or half the expected number of events have occurred).
Potential recommendations that the Board might make include increasing the sample size if the event rate is lower than expected, to preserve the power of the study, or discontinuation of the study for futility.9

Statistical Considerations

A formal statement of the objective of a clinical trial is contained within the prespecified null and alternative hypotheses. In a superiority trial, the null hypothesis states that the test and control therapies are equally effective. The alternative hypothesis states that the event rate is lower in patients receiving the test therapy than in those receiving control therapy. The type I error (α, a false-positive statement that erroneously declares there is a difference between the treatments) is typically 2-sided and set at the 5% level. The type II error (β, a false-negative statement that erroneously declares there is no difference between the treatments) is typically set between 10% and 20%, such that the power of the trial (1−β) is between 90% and 80%, respectively. Together with the anticipated event rates and the treatment difference to be detected, the α and β errors determine the sample size of the trial.

Fortunately, the mortality rate from cardiovascular illnesses continues to decline. The implication of a low event rate in the control group is that tens of thousands of patients must be randomized to show a difference between treatments.10 Many trials in cardiology therefore use a composite end point, such as death plus nonfatal myocardial infarction, to maintain a practical size for the trial. When interpreting the results of trials with composite end points, it is important for clinicians to note whether the direction and magnitude of the treatment effect are similar for each of the elements of the end point.

In cardiovascular therapeutics, several efficacious treatments may coexist for a given condition.
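The arithmetic behind these sample-size statements can be sketched with the standard normal-approximation formula for comparing two proportions (a sketch only; the event rates below are hypothetical, and real trial designs use more refined calculations):

```python
from statistics import NormalDist

def n_per_group(p_control, p_test, alpha=0.05, power=0.90):
    """Approximate patients needed per group to detect the difference
    between two event rates with 2-sided type I error alpha and the
    stated power, using the normal-approximation formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~1.28 for power = 0.90
    p_bar = (p_control + p_test) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p_control * (1 - p_control)
                          + p_test * (1 - p_test)) ** 0.5) ** 2
    return numerator / (p_control - p_test) ** 2

# Detecting the same 20% relative risk reduction requires far more
# patients when the control-group event rate is low:
n_common = n_per_group(0.10, 0.08)   # roughly 4300 per group
n_rare = n_per_group(0.03, 0.024)    # roughly 15 000 per group
```

The comparison makes the point in the text concrete: halving the control-group event rate more than triples the required enrollment for the same relative treatment effect.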
Potential differences in clinically important areas such as tolerability, ease of administration, and cost may lead investigators to perform a trial demonstrating therapeutic equivalence of 2 treatments. Because it is not possible to show that 2 active therapies are completely equivalent without a trial of infinite sample size, investigators specify a value (Δ) and consider the test therapy equivalent to the standard therapy if, with a high degree of confidence, the true difference in treatment effects is less than Δ.11 In this case, the null hypothesis states that the rate of events in patients receiving the test therapy exceeds the rate in patients receiving control therapy by at least Δ. The alternative hypothesis states that the rate of events in patients receiving the test therapy is less than the rate in patients receiving control therapy plus Δ. In a classical equivalence trial, if the effects of the 2 treatments differ by more than the equivalence margin (ie, Δ) in either direction, then equivalence is said not to be present.

In practical terms, the usual objective in clinical trials of 2 active therapies is to establish that the new therapy is not worse than the standard therapy by more than Δ.3 Such one-sided comparisons are referred to as noninferiority trials (Figure 2). The new therapy may satisfy the definition of noninferiority but, depending on the results, may or may not actually show superiority compared with the standard therapy. The α and β errors of the noninferiority trial determine the sample size, just as for superiority trials.

It is important that investigators prespecify the noninferiority margin before learning the trial results, to avoid the bias that might be introduced by retrofitting a noninferiority margin such that the test therapy satisfies the definition of noninferiority.
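The noninferiority logic illustrated in Figure 2 reduces to comparing a confidence interval against the margin. A minimal sketch (interval endpoints are the difference in event rates, test minus standard, so negative values favor the test drug; the trial intervals below are hypothetical):

```python
def classify_trial(ci_lower, ci_upper, delta):
    """Interpret a 2-sided 95% CI for the event-rate difference
    (test minus standard) against a prespecified margin delta."""
    if ci_upper < 0:
        return "superior"                   # entire CI left of zero
    if ci_upper <= delta:
        return "noninferior"                # upper bound within the margin
    if ci_lower > 0:
        return "suggestive of inferiority"  # entire CI right of zero
    return "inconclusive"                   # wide CI straddling both limits

# Hypothetical trials echoing the pattern of Figure 2, with delta = 2%:
results = {name: classify_trial(lo, hi, 0.02) for name, (lo, hi) in {
    "A": (-0.040, -0.010),   # superiority
    "B": (-0.015, 0.010),    # noninferiority
    "E": (0.005, 0.050),     # suggestive of inferiority
    "F": (-0.060, 0.070),    # inconclusive (small trial, wide CI)
}.items()}
```

Superiority is checked first because a superior therapy trivially satisfies noninferiority, mirroring the text's point that a noninferior therapy may or may not also show superiority.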
Specification of the appropriate margin, or Δ, is a challenging area involving the desire of regulatory authorities to be assured that the test therapy is at least superior to placebo and the desire of clinicians to set Δ at a clinically meaningful difference between treatments.

Critical Readings of Clinical Trials

By asking 3 main sets of questions, such as those in the Table, physicians can integrate the information in articles describing clinical trials into their own practice.12,13

Because many clinical trials in cardiology involve a comparison of the event rates in 2 groups of patients, those allocated to control and test therapies, it is convenient to summarize the data in a 2×2 table, such as that in Figure 3. The event rates in the groups are compared using a χ2 test or Fisher's exact test to determine the statistical significance of the difference in event rates.14

Several different statements can be constructed to describe the treatment effect. The degree of imprecision of the estimate of the treatment effect is typically presented in the form of 95% confidence intervals. Convenient terms used in reporting clinical trial results are the relative risk and odds ratio. As the event rate in the control group increases, the odds ratio deviates farther from the relative risk, and clinicians should rely more on the relative risk.

It is important to scrutinize all statements describing the treatment effect to gain a comprehensive picture of the magnitude of the observation and its implications for clinical practice. If practitioners are given clinical trial results only in the form of the relative risk reduction, they tend to perceive a greater effectiveness of the test intervention than if a more comprehensive statement is provided that includes the absolute risk difference and the number of patients who need to be treated to prevent one event.15

Against the benefits associated with a test therapy, clinicians must weigh the risks associated with its use.
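Using the worked numbers from Figure 3 (600/5000 events with treatment A versus 750/5000 with treatment B), these effect measures can be computed directly (a sketch; a full trial report would add confidence intervals and a formal test such as χ2):

```python
def effect_measures(events_test, n_test, events_ctl, n_ctl):
    """Relative risk, odds ratio, absolute risk difference, and the
    number needed to treat for a 2x2 trial summary."""
    r_test, r_ctl = events_test / n_test, events_ctl / n_ctl
    rr = r_test / r_ctl                            # relative risk
    or_ = ((events_test / (n_test - events_test))
           / (events_ctl / (n_ctl - events_ctl)))  # odds ratio
    ard = r_ctl - r_test                           # absolute risk difference
    nnt = 1 / ard                                  # number needed to treat
    return rr, or_, ard, nnt

rr, or_, ard, nnt = effect_measures(600, 5000, 750, 5000)
# rr = 0.80, odds ratio ~0.77, ard = 0.03 (3%), nnt ~33
```

Note how the odds ratio (about 0.77) already drifts below the relative risk (0.80) at a 15% control-group event rate; the gap widens as events become more common, which is why the relative risk is preferred when event rates are high.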
Terms that describe the harmful effects of test therapies include the absolute risk increase (the absolute increase in events with the test therapy compared with control therapy) and the number needed to harm (1/absolute risk increase). The composite of benefit and harm has sometimes been expressed as net clinical benefit. An example is the composite of lives saved with fibrinolytic therapy for ST-elevation myocardial infarction plus the number of patients who survive but suffer a severe disabling stroke from intracranial hemorrhage.16

When weighing the evidence from a single clinical trial for a treatment decision in an individual patient, physicians must consider more than the level of significance of the findings.17 A judgment must be made about whether the patient is representative of the type of patients enrolled in the trial. Additional data from other related trials should be incorporated into the decision-making process. Pooling of trial data and synthesis of the information in the form of a meta-analysis, if available, may be helpful.18 The complex interplay of benefit, harm, and cost is best analyzed with the techniques of decision analysis and cost-effectiveness analysis, and clinicians should familiarize themselves with such analyses for the therapy of interest if they are available.

Figure 1. Basic structure of the classic randomized controlled trial. Patients who fulfill the enrollment criteria are randomly allocated to receive either treatment A (test therapy) or treatment B (control therapy). A stratified randomization scheme may be used to balance key baseline characteristics. In multicenter trials, the randomization scheme is usually balanced to provide the desired ratio of treatment assignments in blocks of patients at each center. Ideally, a double-blind design is used in which neither the patient nor the investigator knows the treatment to which the patient has been allocated.
Rx indicates treatment.

Figure 2. Example of the design and interpretation of noninferiority trials. The prespecified zone of noninferiority is usually based on prior trials comparing the standard drug to placebo. In the example shown, the difference in event rates between the test drug and standard drug is plotted with a depiction of the point estimates (black squares labeled A through F) and the 2-sided 95% confidence intervals. Trial A has confidence intervals that fall entirely to the left of zero and are consistent with superiority of the test drug compared with the standard drug. Although the point estimates for trials B and C are different, the upper bound of the confidence interval (1-sided) falls within the zone of noninferiority, allowing investigators to claim that the test drug is "not inferior" to the standard drug and, for practical purposes, may be considered therapeutically similar. Trials D and E do not satisfy the criteria for noninferiority because the upper bound of the confidence intervals falls to the right of the right edge of the zone of noninferiority. The findings of trial E are actually suggestive of inferiority of the test drug compared with the standard drug. Trial F is an inconclusive study due to its small sample size and extremely wide confidence intervals, preventing investigators from claiming either superiority or noninferiority of the test drug compared with the standard drug.

Figure 3. Evaluation of a clinical trial. In this example, 10 000 patients meeting enrollment criteria for the randomized controlled trial are randomized such that 5000 patients receive treatment A and 5000 patients receive treatment B. A total of 600 patients in group A experience an event (eg, mortality), yielding an event rate of 12%; 750 patients in group B experience an event, yielding an event rate of 15%.
The 2×2 table on the right is then constructed, and various statistical tests are performed to evaluate the significance of the difference in event rates between groups A and B (RA and RB, respectively). Common statements describing the treatment effect are the relative risk (of events in group A versus group B), the odds ratio (for development of events in group A versus group B), and the absolute risk difference (of events in group A versus group B), using the formulae shown. A clinically useful method of expressing the results is to calculate the number of patients that need to be treated to prevent one event. Adapted from Figure 10-6 of Antman E. Overview of medical therapy. In: Califf R, ed. Acute Myocardial Infarction and Other Acute Ischemic Syndromes. Philadelphia: Current Medicine; 1995.

Table 1. Questions to Ask When Reading and Interpreting the Results of a Clinical Trial

Are the results of the study valid?
Primary guides
1. Was the assignment of patients to treatment randomized?
2. Were all patients who entered the trial properly accounted for at its conclusion?
3. Was follow-up complete?
4. Were patients analyzed in the groups to which they were randomized?
Secondary guides
5. Were patients, their clinicians, and study personnel "blind" to treatment?
6. Were the groups similar at the start of the trial?
7. Aside from the experimental intervention, were the groups treated equally?

What were the results?
8. How large was the treatment effect?
9. How precise was the treatment effect?

Will the results help me in caring for my patients?
10. Does my patient fulfill the enrollment criteria for the trial? If not, how close is my patient to the enrollment criteria?
11. Does my patient fit the features of a subgroup in the trial report? If so, are the results of the subgroup analysis in the trial valid?
12. Were all the clinically important outcomes considered?
13. Are the likely treatment benefits worth the potential harm and costs?

Adapted from the data in References 12 and 13.

Footnotes

Correspondence to Elliott M. Antman, MD, Cardiovascular Division, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115. E-mail [email protected]

References

1. Woolf SH. The need for perspective in evidence-based medicine. JAMA. 1999;282:2358–2365.
2. Meinert C. Clinical trials: design, conduct, and analysis. In: Lilienfeld A, ed. Monographs in Epidemiology and Biostatistics. Vol 8. New York: Oxford University Press; 1986:65–70.
3. Temple R, Ellenberg SS. Placebo-controlled trials and active-control trials in the evaluation of new treatments, part 1: ethical and scientific issues. Ann Intern Med. 2000;133:455–463.
4. Ellenberg SS, Temple R. Placebo-controlled trials and active-control trials in the evaluation of new treatments, part 2: practical issues and specific cases. Ann Intern Med. 2000;133:464–470.
5. Braunwald E, Antman EM, Beasley JW, et al. ACC/AHA guidelines for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on the Management of Patients With Unstable Angina). J Am Coll Cardiol. 2000;36:970–1062.
6. Friedman L, Furberg C, DeMets D. Fundamentals of Clinical Trials. 3rd ed. Littleton: PSG Publishing; 1998:41–60.
7. Food and Drug Administration. International conference on harmonization: choice of control group in clinical trials. Federal Register. 1999;64:51767–51780.
8. DeMets DL, Pocock SJ, Julian DG. The agonizing negative trend in monitoring of clinical trials. Lancet. 1999;354:1983–1988.
9. Ware J, Muller J, Braunwald E.
The futility index: an approach to the cost-effective termination of randomized clinical trials. Am J Med. 1985;78:635–643.
10. Collins R, MacMahon S. Reliable assessment of the effects of treatment on mortality and major morbidity, I: clinical trials. Lancet. 2001;357:373–380.
11. Ware JH, Antman EM. Equivalence trials. N Engl J Med. 1997;337:1159–1161.
12. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature, II: how to use an article about therapy or prevention, A: are the results of the study valid? JAMA. 1993;270:2598–2601.
13. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature, II: how to use an article about therapy or prevention, B: what were the results and will they help me in caring for my patients? JAMA. 1994;271:59–63.
14. Glantz S. Primer of Biostatistics. 3rd ed. New York: McGraw-Hill; 1992:110–154.
15. Bucher H, Weinbacher M, Gyr K. Influence of method of reporting study results on decision of physicians to prescribe drugs to lower cholesterol concentration. BMJ. 1994;309:761–764.
16. The Global Use of Strategies to Open Occluded Coronary Arteries (GUSTO III) Investigators. A comparison of reteplase with alteplase for acute myocardial infarction. N Engl J Med. 1997;337:1118–1123.
17. Myerburg RJ, Mitrani R, Interian A, et al. Interpretation of outcomes of antiarrhythmic clinical trials: design features and population impact. Circulation. 1998;97:1514–1521.
18. Lau J, Ioannidis JP, Schmid CH. Summing up evidence: one answer is not always enough. Lancet. 1998;351:123–127.

Copyright © 2001 by American Heart Association.

Keywords: drugs; statistics; cost-benefit analysis; myocardial infarction; trials