Trials and errors in clinical research
1999; Elsevier BV; Volume 354; Language: English
DOI: 10.1016/s0140-6736(99)90459-2
ISSN: 1474-547X
Topic(s): Ethics in Clinical Research
Abstract
Christian Gluud specialised in internal medicine, medical gastroenterology, and hepatology. He is chief physician at the Copenhagen Trial Unit.
There are two important questions to be answered when evaluating clinical research. Do I believe the data presented? Can I use the results for my patients? Avicenna (980–1037), a scientist and philosopher from the Middle East, stressed the pivotal importance of internal and external validity in clinical research almost 1000 years ago. He stated that the trial of a remedy should include reproducible observations of two opposed cases. Furthermore, experimentation must be done in human beings, because testing a drug on lions or horses might not prove anything about its effect on man. Since then the optimum research method has been developed: the randomised, double-blind clinical trial.
[Figure: Justice, roundel from the Tomb of the Cardinal of Portugal, 1460s, by Luca della Robbia (1400–82). © 1999 Bridgeman Art Library]
Internal validity refers to the restriction of bias through proper design, conduct, analysis, and presentation of research. A major advance in design was achieved with methods to avoid selection bias. In 1662, van Helmont in Flanders proposed that lots should be cast to decide which fever patients would or would not be treated with blood letting. In 1753, Lind from England conducted his famous study of six interventions in six groups of two sailors with scurvy. In the 1840s, Semmelweis in Vienna used the birth statistics of pregnant women who had been assigned, according to their time of admission to hospital, to midwifery care or medical care. In the mid-19th century, Balfour examined the preventive effect of belladonna against scarlatina by allocating alternate boys from a list to belladonna or no treatment. In 1896, Fibiger in Denmark prospectively allocated diphtheria patients to anti-diphtheria serum or no serum according to day of admission, to create two comparable groups.
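The allocation methods above (lots, alternation, day of admission) were early attempts at what unpredictable, concealed randomisation now achieves routinely. As a minimal modern sketch (not from the article), a permuted-block sequence for two arms might be generated as follows:

```python
import random

def allocation_sequence(n_patients, block_size=4, seed=None):
    """Generate a permuted-block allocation sequence for two arms.

    Permuted blocks keep group sizes balanced while leaving each
    individual assignment unpredictable -- the property that lots,
    alternation, and day-of-admission rules only approximated.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(0, n_patients, block_size):
        # Each block contains equal numbers of both arms, shuffled.
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_patients]

# In practice the sequence is held centrally, or in sealed opaque
# envelopes, so recruiters cannot foresee the next assignment.
print(allocation_sequence(8, seed=1))
```

The function name and block size are illustrative choices, not a prescribed method.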
These advances increased the use of the randomised design in clinical research in the 20th century. The Medical Research Council's randomised trial of 1948 gives a clear account of the steps taken to conceal the allocation schedule.
Parallel efforts were made to control information bias. In 1794, French researchers dismissed claims that animal magnetism could ameliorate unwanted symptoms by asking blindfolded patients to assess its effects. In 1800, Haygarth in Bath dismissed the efficacy of metal tractors for rheumatism by showing similar effects with fake wooden tractors. In the USA in 1884, Peirce and Jastrow used masked patients in their psychophysiological experiment, randomising with a deck of cards.
It is also important to avoid random error in clinical research. Even small advantages of one intervention over another may be worth finding, and with regard to major outcomes such as death it is usually unrealistic to hope for large intervention effects. Hence, trials ought to be large. Small trials run a substantial risk of committing type I errors (through unequal distribution of prognostic factors) or type II errors. Increased use and refinement of statistical methods during the 20th century have led to a better understanding of these issues.
External validity, or the applicability of trial results to everyday clinical practice, concerns the selection of patients, intervention regimens, and outcome variables. Randomised clinical trials have been accused of low external validity, mainly because of the selection of patients. Of course, summary results of trials will not apply equally to all patients. However, the randomised clinical trial, if used optimally, provides the best estimate of intervention efficacy, also for future patients.
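The point about trial size can be made concrete with a standard power calculation (not part of the article): a sketch, using the usual normal approximation for comparing two proportions, of how many patients per arm are needed to detect a small difference in an outcome such as mortality.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a difference
    between two event proportions (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = (p_control - p_treatment) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Detecting a modest fall in mortality, say from 10% to 8%, already
# requires several thousand patients per arm:
print(sample_size_per_arm(0.10, 0.08))
```

A trial of 50 patients, the median size cited later in the article, is far below this, which is why small trials so often commit type II errors.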
It is not the randomised trial's fault if some researchers include too narrowly defined a group of patients, or if some clinicians extrapolate the results to other, non-examined patients.
At the close of the 20th century, trials still fail to use adequate methods for generation of the allocation sequence, allocation concealment, and double blinding. Many trials involve far too few patients: the median number of patients randomised per trial during the past 50 years does not reach 50. Too few pragmatic trials are done. Statistical methods are used improperly or inadequately. Intention-to-treat analyses are not always undertaken. The main conclusions of trials commonly rest on post-hoc presentations of secondary, tertiary, or more remote outcome variables. These caveats have consequences for medical research and medical decision making. Low-quality randomised clinical trials exaggerate intervention efficacy by up to 50% compared with large, high-quality randomised clinical trials; likewise, inadequate methods for generation of the allocation sequence, allocation concealment, and double blinding lead to significantly exaggerated estimates of intervention efficacy, by up to 50%. Small randomised trials of good methodological quality may, however, predict the results of large trials. More recently, meta-analyses that combine the results of controlled trials have been used increasingly. Such meta-analyses, especially if done as systematic reviews, have already affected the way researchers and clinicians think and act.
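The pooling step at the heart of such meta-analyses can be sketched in a few lines. This is a generic inverse-variance fixed-effect sketch, not a method described in the article; the trial figures are hypothetical.

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of trial effect estimates
    (e.g. log odds ratios). Each trial is weighted by the precision of
    its estimate, so large, precise trials dominate the pooled result."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios with their
# standard errors; the pooled estimate is more precise than any one trial.
pooled, se = fixed_effect_meta([-0.30, -0.10, -0.25], [0.20, 0.10, 0.15])
print(f"pooled log OR = {pooled:.3f} +/- {1.96 * se:.3f}")
```

Systematic reviews add to this pooling step an explicit, reproducible protocol for finding and appraising the trials in the first place.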