Editorial (open access, peer-reviewed)

Should we tolerate tolerability as an objective in early drug development?

2007; Wiley; Volume: 64; Issue: 3; Language: English

10.1111/j.1365-2125.2007.03023.x

ISSN

1365-2125

Authors

Adam F. Cohen

Abstract

In 1803, Friedrich Wilhelm Adam Sertürner started research on the isolation of active analgesic substances from opium. After painstaking chemical isolation he obtained crystals of an apparently pure substance. He put them into food for the mice in his cellar and for unwanted dogs in the neighbourhood; the crystals put them to sleep and killed them. Undaunted, he decided to test the drug in humans (on himself and three 17-year-old friends) 'because experiments on animals do not give exact results' [1]. He started with what he considered to be a very low dose, but of course Sertürner knew only plant substances as medicines. They first took half a grain (30 mg) of morphine in solution, which produced flushing. After half an hour they took another 30 mg, and after a further 15 minutes another similar dose. They developed abdominal pain and faintness, and Sertürner became very sleepy. Concerned by these symptoms, he induced vomiting in all of the subjects and they eventually recovered, although one of them was quite ill and 'spent the night in a deep sleep'. These experiments demonstrated the enormous potency of pure chemical substances on physiological functions and led to the subsequent isolation of other alkaloids (such as atropine, colchicine, codeine, and strychnine), many of which are still used in therapy today. This work was also the start of a new discipline for studying these substances and their effects, and the first departments of pharmacology were founded in Strasbourg, Edinburgh, and London. The experiment was performed many years ago, but it bears an uncanny resemblance to a more recent event in clinical pharmacology in which the starting dose was chosen erroneously [2, 3].

In 1944 another fascinating experiment was performed, to study the tolerability of pure d-tubocurarine. Frederick Prescott, director of clinical research at the Wellcome Research Laboratories, where the substance was isolated, decided to do an experiment on himself. He was used to doing dangerous self-experiments, having previously injected himself with a combination of methamphetamine and morphine to see whether it could be used to maintain blood pressure during surgery; his systolic pressure rose to 250 mmHg and he was hospitalized (he had done the experiment at home). The curare experiment was performed more carefully, with a full protocol, in the Westminster Hospital, and was run by the anaesthetist Geoffrey S.W. Organe, with monitoring of blood pressure, pulse, and respiration (on a drum recorder, of course). The aim was to determine the dose and to see whether the drug had any analgesic properties. The ascending-dose protocol started with 10 mg, after which Prescott experienced muscle weakness for about 15 minutes. A week later he was given 15 mg; the weakness was greater, but he could still swallow and cough. The dose was then increased to 30 mg, which resulted in a terrifying experience. He developed respiratory paralysis while fully conscious, but had no means of communicating his distress, as he was unable to speak. The investigators went on collecting data, but Prescott started to choke on saliva and mucus. Strips of adhesive plaster were put on his legs and ripped off as a test of the analgesic properties of curare, causing him pain. He was saved by an injection of neostigmine, but the investigators never realized that he had been in extreme terror. Even so, he agreed to proceed to the last stage of the experiment, in which he received half the dose intravenously and half intramuscularly [4].
Incidentally, these stories about self-experimentation come from a book by Lawrence Altman, which is essential reading for anyone involved in human experimentation. Both the choice of the starting dose of a new drug and the assessment of its tolerability clearly have a venerable history. The question is whether this tradition should continue unchanged.

The guidelines for clinical trials [5] give tolerability as an objective in early ('Phase I') human pharmacology trials, but there is no indication that determining it is the only, or even the most important, objective. Pharmacokinetics, pharmacodynamics, and even obtaining an indication of clinical effectiveness are all listed. One wonders why many, if not all, protocols for early drug studies in humans have tolerability alone as the primary objective rather than any of the others.

Firstly, the tolerability of new medicines in single doses is not always predictive of tolerability in the clinic. For instance, rofecoxib [6, 7], tolcapone [8, 9], cerivastatin [10, 11], and ximelagatran [12, 13] were all described as having excellent tolerability in early development, yet all were taken off the market because of serious toxicity discovered later. This is of course neither new nor unexpected. The useful DoTS classification of adverse effects (by dose-relatedness, time course, and susceptibility) proposed in 2003 by Aronson and Ferner [14] may help. Adverse reactions that determine tolerability may occur at doses or plasma concentrations that are too high (direct toxic effects), at doses or concentrations that are therapeutic, presumably through some other mechanism (collateral effects), or at relatively low concentrations in susceptible subjects (hypersusceptibility reactions). Given a reasonable detection or measurement system, collateral and toxic effects can be quantified; they have a comfortably graded dose-response relationship and are likely to be reversible. Hypersusceptibility is much less predictable and more serious, in the sense that it may produce tissue damage that is often irreversible, and these effects may have a steep dose-response relationship. Generally they do not occur in everyone and may be caused by genetic polymorphisms; one day it may even be possible to predict them. In most cases, however, whether serious adverse effects are detected will still depend on the play of chance, namely on whether hypersusceptible subjects happen to be present in the cohort. Table 1 gives the probability that at least one event of a given population incidence will be observed in a group of a given size (the simple arithmetic behind it is sketched below). It can readily be seen that a life-threatening event occurring in 1 : 1000 subjects (almost certainly precluding clinical use) will have only a 1 : 100 chance of being detected with the typical early-development group size.

Ethics committees do not usually accept studies whose design does not allow the objectives of the trial to be met. On these grounds alone, it should be clear that making the determination of tolerability the primary, and in a considerable number of Phase I trials the sole, objective is outdated. Pharmacodynamics and pharmacokinetics are what early drug development is about, and trials should be designed and powered accordingly. If one is lucky (or unlucky) enough to detect a tolerability problem early, so much the better, but such a finding is just as likely to be a false positive.
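To illustrate the arithmetic that underlies Table 1, the short sketch below computes the chance of observing at least one adverse event of a given population incidence in a cohort of a given size, using the simple relation P = 1 - (1 - incidence)^n. The incidences and cohort sizes shown are illustrative choices, not the exact entries of the table.

```python
# Probability of observing at least one adverse event of a given
# population incidence in a cohort of n subjects: 1 - (1 - p)**n.
# The incidences and cohort sizes below are illustrative only.

def detection_probability(incidence: float, cohort_size: int) -> float:
    """Chance that at least one subject in the cohort experiences the event."""
    return 1.0 - (1.0 - incidence) ** cohort_size

if __name__ == "__main__":
    for incidence in (1 / 100, 1 / 1000, 1 / 10000):
        for n in (8, 24, 100):
            p = detection_probability(incidence, n)
            print(f"incidence 1:{round(1 / incidence):>5}, n={n:>3}: "
                  f"P(at least one event) = {p:.3f}")
    # An event occurring in 1:1000 subjects has roughly a 1:100 chance
    # of appearing at all in a cohort of about ten subjects.
```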
There is another reason for considering tolerability an obsolete objective. Modern drug development attempts to produce molecules that are highly selective for a known target. If tolerability is the primary objective, the dose of such a drug would have to be increased until some clinically detectable events occurred. This dose could be orders of magnitude above what is needed for the maximal pharmacological effect and might have non-specific effects that endanger the subjects unnecessarily. The approach works only when the dose or plasma concentration that produces intolerability is fairly close to the pharmacologically effective concentration. This is normally the case only for compounds with a small therapeutic margin, such as classical cytotoxic drugs, for which the tolerability event is often bone marrow depression, an effect on cell division or survival that happens also to be a marker of efficacy. This is clearly not the case for most other medicines in development. The ceremonial use of the word tolerability carries other dangers, since apparently even intolerability can be ignored. The experimental rheological compound poloxamer produced clear intolerability in healthy subjects, with loin pain and proteinuria [15], but it was nevertheless used in a large clinical development programme in myocardial infarction, in which up to 8% of the patients had severe, often irreversible, renal dysfunction [16, 17].

A related problem is the starting dose of a new drug, which has to be chosen when there is usually no human experience with the compound. A guideline from the FDA is the best existing help [18]. The approach is based on the NOAEL (the no observed adverse effect level) in the most sensitive species in the preclinical toxicology programme; from this dose the human starting dose is calculated using empirical allometric scaling and safety factors (the conventional calculation is sketched below). This is widely used but, used in isolation, it is flawed, because the approach rests entirely on tolerability in animals. If for some reason the animals tolerate the drug well, the doses derived from their tolerability will also greatly exceed the doses required in humans. Such good tolerability may reflect a lack of pharmacological or biological response in the animal, or differences in the absorption or clearance of the drug between animals and humans. Ironically, the primary pharmacological effect of the drug is not often monitored in animal toxicology studies. This has on occasion led to serious toxicity in humans (who do respond to the biological stimulus) [2, 19, 20]. This is not the fault of the guideline, which allows and suggests the use of alternative methods, including determination of the pharmacologically active dose (PAD), renamed the minimum anticipated biological effect level (MABEL) by the Duff committee, which examined the events surrounding the use of the drug TGN1412 [21, 22]. Tolerability and the NOAEL approach to dose selection figure at the top of the lists in the guideline, but nowhere is it stated that this gives them primary significance. Warrington questioned the approach in 1985 [23], and alternative approaches have been suggested [24]. Yet progress seems to be limited, despite the availability of many techniques that would allow pharmacological characterization of new drugs early on. First administration to humans is still often undertaken with a standardized approach and design, with tolerability as the hallmark, and with starting doses based on tolerability in animals, at the risk of overdosing or worse.
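For concreteness, here is a minimal sketch of that conventional NOAEL-based calculation, broadly in the spirit of the FDA guidance [18]: the animal NOAEL is converted to a human equivalent dose by body-surface-area scaling and divided by a default tenfold safety factor. The NOAEL value and the species conversion factors are illustrative assumptions, not data from any particular programme, and nothing in the calculation refers to the expected pharmacological activity in humans.

```python
# Conventional NOAEL-based estimate of a maximum recommended starting
# dose (MRSD), in the spirit of the FDA guidance discussed in the text:
#   HED (mg/kg)  = animal NOAEL (mg/kg) * (animal Km / human Km)
#   MRSD (mg/kg) = HED / safety factor (10 by default)
# The Km factors (body weight divided by body surface area) and the
# example NOAEL are illustrative assumptions, not data from any study.

KM = {"mouse": 3, "rat": 6, "monkey": 12, "dog": 20, "human": 37}

def human_equivalent_dose(noael_mg_per_kg: float, species: str) -> float:
    """Scale an animal NOAEL to a human equivalent dose by body surface area."""
    return noael_mg_per_kg * KM[species] / KM["human"]

def starting_dose(noael_mg_per_kg: float, species: str,
                  safety_factor: float = 10.0) -> float:
    """Maximum recommended starting dose (mg/kg) from a single-species NOAEL."""
    return human_equivalent_dose(noael_mg_per_kg, species) / safety_factor

if __name__ == "__main__":
    # Hypothetical example: a rat NOAEL of 50 mg/kg.
    mrsd = starting_dose(50.0, "rat")
    print(f"MRSD ~ {mrsd:.2f} mg/kg (~{mrsd * 70:.0f} mg for a 70 kg subject)")
    # Nothing in this calculation refers to the pharmacologically active
    # dose (PAD/MABEL); a drug that animals happen to tolerate well can
    # therefore yield a starting dose far above what humans actually need.
```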
In this and recent issues of the Journal we have published several papers that include claims about tolerability. For instance, Forst et al. studied an NMDA antagonist in neuropathic pain [25]. The compound was apparently 'well tolerated' in healthy volunteers, although high doses induced a rise in blood pressure. Systolic blood pressure rose after doses of 125 and 250 micrograms, but the increases were 'clinically negligible'. After 500 micrograms the systolic pressure rose by as much as 16 mmHg, and when the dose was doubled again it rose by 30 mmHg. The study was performed in patients with neuropathy, many of whom had diabetes mellitus, and the authors concluded that these blood pressure effects determined the maximum tolerated dose. Unfortunately, we are not told how much NMDA antagonism could have occurred at the doses that produced these dangerous rises in blood pressure, and the study was not powered to detect effects on pain, so we remain unsure where on the dose-response curve all this happened. Despite this, the authors concluded that the compound was 'reasonably well tolerated' at doses up to 500 micrograms. While such a study design and conduct are entirely standard, one has to ask serious questions about the logic of such a conclusion: with a longer duration of treatment, increases in systolic pressure of this size can indeed cause serious intolerability, especially in patients with diabetes and vascular disease.

There is another example in the paper by Stangier et al., who studied a new oral thrombin inhibitor [26]. They studied its pharmacokinetics and its pharmacodynamic effects on coagulation and determined PK/PD relationships, thereby fulfilling important objectives of clinical pharmacological research. The drug was again 'well tolerated' after single doses. The authors optimistically stated that hepatotoxicity was not detected, but they should perhaps be reminded that detecting it would have been highly unlikely even if the drug were hepatotoxic. After multiple doses the drug was again 'well tolerated', although eight subjects experienced bleeding events, albeit none of them serious. They concluded that the drug has a favourable safety profile. The novel estrogen receptor modulator CHF4227, which also features in this issue, was likewise pronounced 'well tolerated' in an interesting clinical pharmacology study that included markers of efficacy [27].

These are excellent studies, and we do not criticize them with regard to the data they provide on human pharmacology or pharmacokinetics. However, statements about tolerability and safety that appear in many papers about drugs are often unsupported by the data. The studies are not methodologically capable of supporting such claims, which, even if true in the subjects studied, cannot be considered externally valid and may mislead when quoted out of context. There appears to have been little progress since Sertürner and Prescott, yet the use of biomarkers, advanced measuring techniques, and modelling of pharmacokinetics and pharmacodynamics [28, 29] can all be included in first-administration studies, yielding quantitative data rather than unsupported opinions.

Are there, then, no drugs that can universally be acclaimed well tolerated and safe? Leckridge and Mathie, in a dispute about the efficacy of homoeopathy, give a clue in the correspondence section of this issue [30, 31], but perhaps this takes us outside the realm of the mechanistic drug research that is the basis of clinical pharmacology. We shall look critically at claims about tolerability and safety, and we encourage the submission of studies that feature modern mechanistic approaches to early studies in man.

References