British HIV Association guidelines for the treatment of HIV‐1‐infected adults with antiretroviral therapy 2008
2008; Wiley; Volume 9, Issue 8; Language: English
10.1111/j.1468-1293.2008.00636.x
ISSN 1468-1293
Table of contents
1.0 Introduction
2.0 Methodology
2.1 Basing recommendations on evidence
2.2 Implications for research
2.3 Use of surrogate marker data
2.4 Issues concerning design and analysis of clinical trials
2.4.1 Trial designs
2.4.2 Viral load outcome measures
2.4.3 Noninferiority
2.4.4 Cross-study comparisons and presentation of data
2.5 Adverse event reporting
3.0 When to start
3.1 Primary HIV infection
3.2 Established HIV infection
3.3 Patients with a CD4 count >350 cells/μL
3.4 Comorbidities
4.0 What to start with
4.1 Which HAART regimen is best?
4.2 Recommendations
4.3 Two NRTIs plus an NNRTI
4.3.1 Efavirenz (preferred regimen)
4.3.2 Nevirapine
4.4 Two NRTIs plus a boosted PI
4.4.1 Boosted lopinavir
4.4.2 Boosted fosamprenavir
4.4.3 Boosted saquinavir
4.4.4 Boosted or unboosted atazanavir
4.4.5 Boosted darunavir (unlicensed for naïve patients)
4.5 Three NRTIs
4.6 Choice of two NRTIs
4.7 Coformulated two NRTIs
4.7.1 Tenofovir/emtricitabine (Truvada)
4.7.2 Abacavir/lamivudine (Kivexa)
4.7.3 Zidovudine/lamivudine (Combivir)
4.8 Other two-NRTI combinations
4.9 Conclusions
5.0 Virological failure: after first-line treatment
5.1 Viral load blips
5.2 Sustained viral load rebound
5.3 Changing therapy
5.4 Virological failure with no resistance
5.5 First-line virological failure with PI mutations
5.6 Virological failure with NNRTI mutations
5.7 Virological failure with NRTI mutations alone
6.0 Subsequent virological failure
6.1 The patient with therapy options
6.2 The patient with few or no therapy options: continue, interrupt or change therapy?
6.2.1 Continuing the failing regimen
6.3 Treatment interruption
6.4 Change
7.0 New drugs
7.1 Etravirine (TMC-125)
7.1.1 Pharmacokinetics
7.1.2 Resistance
7.1.3 Efficacy, safety and tolerability
7.2 Maraviroc
7.2.1 Pharmacokinetics
7.2.2 Resistance
7.2.3 Efficacy, safety and tolerability
7.3 Integrase inhibitors
7.3.1 Raltegravir in treatment-experienced patients
7.3.2 Resistance
7.3.3 Raltegravir in treatment-naïve patients
8.0 Treating patients with chronic hepatitis B or C
8.1 Hepatitis B
8.1.1 When to treat
8.1.2 What to treat with
8.2 Hepatitis C
8.2.1 When to treat
8.2.2 What to treat with
8.2.3 Avoiding antiretroviral hepatotoxicity
8.2.4 Recommendations
9.0 Guidelines for the management of metabolic complications in HIV infection
9.1 Lipid abnormalities
9.1.1 Evaluation of risk
9.1.2 Which calculator to use
9.1.3 Treatment of lipid disorders
9.1.4 Switching ART
9.1.5 Lipid-lowering treatments
9.1.6 Which agents to use
9.2 Insulin resistance and diabetes
9.2.1 Recommendations for assessment and monitoring of insulin resistance
9.2.2 Treatment
9.3 Prevention and management of lipodystrophy
9.3.1 Assessment of lipodystrophy
9.4 Management of lipoatrophy
9.4.1 Surgical intervention
9.5 Lipohypertrophy
9.5.1 Prevention
9.5.2 Pharmacological intervention
9.5.3 Surgical therapy
9.6 Lactic acidosis and hyperlactataemia
10.0 Recommendations for resistance testing
10.1 Treatment-naïve patients
10.2 Treatment-experienced patients
10.3 Key principles in the interpretation of antiretroviral resistance in treatment-experienced patients
10.3.1 General recommendations
11.0 Adherence
11.1 Assessing adherence
11.2 Interventions to support adherence
11.3 Costs
12.0 Pharmacology
12.1 Drug interactions
12.2 Therapeutic drug monitoring (TDM)
12.3 Stopping therapy
12.4 Pharmacogenetics
13.0 HIV testing
14.0 Cost-effectiveness
15.0 Conflict of interest
16.0 References
17.0 Appendix

The 2008 BHIVA Guidelines have been updated to incorporate all the new relevant information
(including presentations at the 15th Conference on Retroviruses and Opportunistic Infections 2008) since the last iteration. The guidelines follow the methodology outlined below, and all the peer-reviewed publications and important, potentially treatment-changing abstracts from the last 2 years have been reviewed. The translation of data into clinical practice is often difficult even with the best possible evidence (i.e. two randomized controlled trials) because of trial design, inclusion criteria and precise surrogate marker endpoints (see Appendix). Recommendations based upon expert opinion rest on the weakest evidence, but producing a consensual opinion about current practice is perhaps an important reason for writing the guidelines. It must, however, be appreciated that such opinion is often wrong and should not stifle research that challenges it. Similarly, although the Writing Group seeks to provide guidelines to optimize treatment, such care needs to be individualized and we have not constructed a document that we would wish to see used as a ‘standard’ for litigation. The Writing Group used an evidence-based medicine approach to produce these guidelines. In reality, if only the most reliable form of clinical evidence were taken into account (i.e. results of one or more randomized controlled trials with clinical endpoints), it would be impossible to formulate these guidelines. Many important aspects of clinical practice remain to be formally evaluated and very few trials with clinical endpoints are ongoing or planned. Many trials have been performed in order to obtain licensing approval for a drug. In many cases, they are the only source of evidence for comparing two drug regimens. However, their designs are not ideally suited to addressing questions concerning clinical use. The most significant drawbacks of such trials are their short duration and the lack of follow-up data on patients who switch therapy.
In most cases, the only available data on long-term outcomes are from routine clinical cohorts. While such cohorts are representative of routine clinical populations, the lack of randomization to different regimens means that comparisons between the outcomes of different regimens are highly susceptible to bias [1,2]. Expert opinion forms an important part of all consensus guidelines; however, this is the least valuable and robust form of evidence. Unless guidelines are interpreted and applied cautiously and sensibly, valuable research initiatives that might improve standards of care will be stifled. It would be wrong to suggest that certain controlled clinical trials would be unethical if they did not conform to the guidelines, especially when these guidelines are based mainly upon expert opinion rather than more reliable evidence [3]. CD4 cell counts and plasma viral load are used as markers of the effect of antiretroviral therapy (ART). Reduction in viral load leads to a rise in peripheral blood CD4 cell count, with greater rises being seen in those with greater and more sustained viral suppression [4]. Changes in these markers in response to therapy are strongly associated with clinical response [5–9]. CD4 cell counts measured in people on ART have been associated with a risk of AIDS-defining diseases no higher than that expected in untreated individuals with similar CD4 cell counts [10–13]. The CD4 cell count is a better indicator of the immediate risk of AIDS-defining diseases than the viral load in those on ART [14,15]. However, it should be remembered that CD4 cell count and viral load responses do not precisely reflect the expected clinical outcome and are not perfect surrogates of the clinical response [9,16,17]. This is because the drugs have other effects with clinical consequences besides those reflected in viral load and CD4 cell count changes. 
Even so, for patients with a given CD4 cell count and viral load, the risk of AIDS disease appears to be similar, regardless of the specific antiretroviral drugs being used [18]. The relatively short length of trials designed to obtain drug approval means that, at the time of licensing, little is known about the long-term consequences of a drug. As stated above, most antiretroviral drug trials are performed by pharmaceutical companies as part of their efforts to obtain licensing approval, and the designs are often not ideally suited to deriving information on using the drugs in clinical practice. Besides the short duration of follow-up, their key limitation is the lack of data on outcomes in people who change from the original randomized regimen, and the absence of any description of what those new regimens are. The results are, therefore, only clearly interpretable as long as a very high proportion of participants remain on the original, allocated regimens. Clinical questions about which drugs to start with, or switch to, require longer term trials that continue following patients despite changes to the original treatment. Such changes in regimen are common in real-life practice and so, from a clinical perspective, it makes little sense to ignore what happens to patients after a specific regimen has been discontinued. The use of a given drug can affect outcomes long after it has been stopped. For example, it may select for virus resistant to drugs not yet encountered or cause toxicities that overlap with those caused by other drugs. However, interpretation of such longer term trials is not straightforward, and account must be taken of which drugs were used subsequent to the original regimen in each arm. The Writing Group generally favours entry into well-constructed trials for patients whose clinical circumstances are complex, with a number of specific instances being mentioned in these guidelines.
NAM maintains a list of trials currently recruiting in the UK at http://www.aidsmap.com, and treatment units should work to ensure arrangements are in place to enable eligible patients to enter trials at centres within or indeed outside their clinical networks. In most efficacy trials, treatments are compared in terms of viral load as defined by plasma HIV RNA. Depending on the target population, the primary outcome measure may be defined to include the achievement of viral suppression below a certain limit (usually 50 HIV-1 RNA copies/mL) at a pre-specified time (e.g. 24 or 48 weeks after randomization), time to viral rebound or time-weighted average change from baseline. To avoid selection bias, all enrolled patients must be included in the analysis comparing the treatments, and analysed in the group to which they were randomized, even if they are no longer taking the treatment they were allocated (the intent-to-treat principle). Inability to assess outcomes for some patients (for example because of dropout before completion of the trial) leads to missing data, a potential source of bias. The frequency of and reasons for missing outcomes may be affected by many factors, including the efficacy of treatments, toxicity and the length of follow-up. Interpretation of the results of a trial is particularly problematic if a substantial number of patients drop out for reasons related to the outcome, whether by design (as in many pharmaceutical industry trials, where patients are withdrawn when they change their randomized treatment) or otherwise. This problem can be addressed at three levels: in the design, conduct and analysis stages of the trial. Changes in treatment during the trial must be anticipated, and it is necessary to continue collecting data on all patients, even if they have switched from the original regimen, thus avoiding missing data by design and/or poor implementation.
While several analytical methods have been published for handling missing outcomes in clinical trials, all make assumptions that cannot be completely verified. Whichever method is used for handling missing outcomes at the analysis stage, it must be pre-specified in the protocol or the statistical analysis plan. When the outcome is the proportion of people with viral load below 50 copies/mL at a given time-point, the approach widely adopted is to assign an outcome of failure to achieve a value below 50 copies/mL to all patients with missing outcomes (and to those who have switched from the randomized treatment, regardless of whether they remain under follow-up). This is known as the missing equals failure (MEF) approach [14–21]. This approach is used in trials for drug licensing because it counts anyone who has to stop the drug of interest as having failed, and thus prevents any tendency for drugs used by a patient after the drug of interest has failed to influence the trial results. However, such an approach implicitly equates failure of a regimen because of inadequate potency and/or viral drug resistance with the inability to tolerate a regimen because of pill burden, inconvenience and/or adverse effects, and also with assessments that are missing for other reasons, including randomly missed visits, even though the implications of these various outcomes are likely to be substantially different. This approach is often labelled conservative compared with other possible approaches because it gives the minimum proportion of patients with viral load below 50 copies/mL for any given treatment group over all possible approaches. However, the primary purpose of an endpoint is to compare treatment arms, and the reasons for missing outcomes may well differ between treatments.
In this context, the approach is not conservative in any general sense, and its indiscriminate use without consideration of its inherent limitations carries a risk of bias that could be greater than that from simply ignoring missing values. For these reasons, trials that are conducted for the purposes of licensing a particular drug, and which treat stopping of the drug as treatment failure and ignore outcomes occurring after the drug has been stopped, do not always provide the type of information that is most useful for clinical practice. In the past, trials have generally considered whether or not the viral load is below 50 copies/mL at a given time-point (e.g. 48 weeks). In recent years, the tendency has been to consider whether virological failure (or ‘loss of virological response’, usually defined as two consecutive values above 50 copies/mL) has occurred by a certain time-point, rather than whether the viral load at the time-point is below 50 copies/mL, as described above. In the (common) case where missing viral load values and switches in therapy are treated the same as values above 50 copies/mL, this approach uses a ‘time to loss of virological response’ (TLOVR) algorithm [20]. The two approaches will give similar but not identical results; for example, patients can fulfil the definition of loss of virological response before 48 weeks but then have a viral load value below 50 copies/mL at 48 weeks itself, without any change in regimen. Randomization in a trial ensures balance in prognosis between the treatment arms at baseline. Inability to assess outcomes for some patients can disturb this balance and create bias in the comparison between the treatment arms. To avoid the risk of such bias, analysis by intent to treat includes outcomes for all randomized patients. So-called ‘on-treatment’ analyses consider outcomes only in those still receiving the original allocated treatment.
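As a minimal illustration of the snapshot MEF rule described above, the following sketch classifies each randomized patient at the time-point; the field names, the week-48 time-point and the example cohort are invented for the illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    vl_at_week48: Optional[float]  # viral load in copies/mL; None if missing
    switched_regimen: bool         # switched from the randomized regimen

def mef_success(p: Patient, threshold: float = 50.0) -> bool:
    """Snapshot 'missing equals failure' rule: a patient counts as a
    responder only if still on the randomized regimen with a measured
    viral load below the threshold at the time-point. Missing values
    and regimen switches are both counted as failure."""
    if p.switched_regimen or p.vl_at_week48 is None:
        return False
    return p.vl_at_week48 < threshold

# The denominator is everyone randomized (intent-to-treat principle).
cohort = [
    Patient(vl_at_week48=40.0, switched_regimen=False),   # suppressed -> success
    Patient(vl_at_week48=None, switched_regimen=False),   # missing    -> failure
    Patient(vl_at_week48=30.0, switched_regimen=True),    # switched   -> failure
    Patient(vl_at_week48=120.0, switched_regimen=False),  # rebound    -> failure
]
response_rate = sum(mef_success(p) for p in cohort) / len(cohort)
print(response_rate)  # 1 responder out of 4 -> 0.25
```

Note that the rule deliberately conflates the three failure reasons (rebound, intolerance leading to a switch, and missing data), which is exactly the limitation discussed above. The TLOVR algorithm differs in that it looks for confirmed rebound (two consecutive values above the threshold) at any point up to the time-point, rather than taking a single snapshot.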
Here, the difference between assessing the proportion with viral load below 50 copies/mL at a given time-point and assessing the proportion with viral load above 50 copies/mL by a given time-point becomes greater. In the context of an assessment of the proportion of people with viral load below 50 copies/mL at a given time-point, on-treatment analysis makes little sense because therapy has been switched in patients who experience viral load rebound during a trial, so the only patients who remain on the regimen are those with viral load below 50 copies/mL. Hence, all regimens that lead to a viral load below 50 copies/mL in at least one person should lead to a value of 100%, unless there are patients who have viral load above 50 copies/mL at the time-point but are yet to have their regimen switched. In contrast, an assessment of whether the viral load was above 50 copies/mL by a given time-point (i.e. time to virological failure or loss of virological response), which censors observation on patients once they have switched from the original randomized regimen, may be more revealing, but is still subject to potential bias. In contrast to superiority trials where the primary objective is to demonstrate that a new treatment regimen, or strategy, is more efficacious than a well-established treatment, the aim of a noninferiority trial is to show that there is no important loss of efficacy if the new treatment is used instead of the established reference. This is particularly relevant in evaluating simplification strategies where the new treatment strategy is better than the reference treatment in aspects other than efficacy, for example toxicity, tolerability or cost. A critical aspect of noninferiority trials is the judgement of what degree of possible loss of efficacy will be tolerated – the noninferiority margin (sometimes referred to as the delta). 
The choice of the noninferiority margin depends on what is considered to be a clinically unimportant difference in efficacy, taking into account other potential advantages of the new treatment. To demonstrate noninferiority, large numbers of patients are usually required because of the need to exclude the possibility that there is even moderate loss of efficacy with the new treatment. The trial protocol must pre-specify the noninferiority margin (e.g. the proportion with viral load below 50 copies/mL at 48 weeks, in people receiving the new treatment, is not smaller than the same proportion in the reference treatment by more than 5%). As an illustration of the interpretation of the results of noninferiority trials, we shall consider the case where the primary efficacy outcome is the proportion of participants with viral load below 50 copies/mL at 48 weeks. Conclusions on the noninferiority of a new treatment are then based on the lower confidence bound, which is the lower limit of the one-sided 95% (or sometimes 97.5%) confidence interval for the difference (new – standard) between the outcome for the new treatment and the outcome for the standard treatment. Noninferiority is indicated when this lower confidence bound for the difference between the two treatments excludes loss of efficacy greater than the pre-specified noninferiority margin. So, for example, if the proportion with viral load <50 copies/mL with the standard treatment is 85% and the corresponding proportion with the new treatment is 87%, then the observed difference in proportions (new – standard) is 2%. If the lower confidence bound of this difference is −8%, this can be interpreted as meaning that (within the appropriate level of confidence) the new treatment is at most 8% inferior to the standard treatment. If (and only if) the pre-specified noninferiority margin is 8% or above, we would conclude that the new treatment is noninferior to the standard.
If the proportions were instead 85% for the standard treatment and 79% for the new treatment, with a difference of −6% and lower confidence bound of −11%, then noninferiority of the new treatment could again be concluded if the pre-specified noninferiority margin was 11% or higher, regardless of whether the observed difference of −6% was significantly different from zero; i.e. even if the proportion of participants receiving the new treatment with viral load <50 copies/mL was significantly lower than the corresponding proportion for the standard treatment. If, however, the pre-specified noninferiority margin was less than 11% (e.g. 5%) and we obtained the same outcome data, then noninferiority would not be established even if the difference between the two treatments was not statistically significant. This illustrates the importance of a suitable choice of noninferiority margin. These margins have tended to range from 10 to 15%, which seems high. The smaller the noninferiority margin, the stricter the test for the new treatment, but the larger the sample size required. It should be noted that finding, in a significance test, that the response to the new treatment is not significantly inferior to that of the standard treatment is not evidence for noninferiority. It is also important to note that a very high standard of trial conduct (e.g. minimizing violations of entry criteria, nonadherence to allocated regimens and loss to follow-up) is more critical in noninferiority than in superiority trials. Such deviations from the protocol would tend to bias the difference between the two treatments towards zero and thus increase the chance of erroneously concluding noninferiority. Two frequently asked questions are: Can we infer superiority or inferiority of a new treatment from the results of a trial designed to establish its noninferiority to the standard treatment?
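The trade-off between the margin and the required sample size can be sketched with the standard normal-approximation formula for comparing two proportions. This is a rough illustration only; the assumed response rate, power and one-sided alpha are invented for the example, and real trials would use the pre-specified design calculation:

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n_per_arm(p: float, margin: float,
                             alpha: float = 0.025, power: float = 0.90) -> int:
    """Approximate patients per arm needed to demonstrate noninferiority
    of proportions, assuming both treatments truly have response rate p
    (normal approximation, one-sided significance level alpha)."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided alpha quantile
    z_b = NormalDist().inv_cdf(power)      # power quantile
    n = (z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2
    return ceil(n)

# Halving the margin roughly quadruples the trial size:
print(noninferiority_n_per_arm(0.80, 0.12))  # wide margin: a few hundred per arm
print(noninferiority_n_per_arm(0.80, 0.05))  # tight margin: over a thousand per arm
```

This makes concrete the statement above: a 10-15% margin keeps trials of a practical size, whereas a strict 5% margin demands far larger enrolment.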
What about inferring noninferiority of the new treatment on the basis of the results of a trial designed to demonstrate its superiority? The answer to the first question is ‘yes’. Conclusions of superiority (or inferiority) are based, not on the one-sided confidence interval as for noninferiority, but on the standard 95% two-sided confidence interval. If the proportion of patients with viral load <50 copies/mL with the standard treatment is 85% and the corresponding proportion with the new treatment is 91%, then the observed difference in proportions (new – standard) is 6%. If the 95% confidence interval for this difference is 1–11%, with the lower bound greater than 0, then this can be interpreted as demonstrating the superiority of the new treatment at the 5% level relative to the standard treatment in a straightforward way regardless of the value of the pre-specified noninferiority margin. If the proportions are instead 85% for the standard treatment and 76% for the new treatment, with a difference of −9% and a two-sided 95% confidence interval of −12 to −6%, then this can be interpreted as a demonstration of inferiority of the new treatment provided that the pre-assigned noninferiority margin is 6% or lower. If instead the pre-assigned noninferiority margin is 8%, inferiority is not established, notwithstanding the highly statistically significant lower efficacy of the new treatment, because an 8% difference has been defined a priori as a clinically unimportant difference. This again highlights the importance of pre-specifying a sufficiently low noninferiority margin truly reflecting the highest clinically nonsignificant loss of efficacy with the new treatment. Finally, in answer to the second question, any inference about noninferiority from the results of a superiority trial would not be valid because the noninferiority margin cannot be assigned post hoc with knowledge of interim or final data from the trial. 
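The decision rules in the worked examples above can be summarised in a short sketch. The Wald interval and the example counts are illustrative assumptions (the article does not give sample sizes); a real analysis would use the trial's pre-specified method:

```python
from math import sqrt
from statistics import NormalDist

def diff_ci(x_new: int, n_new: int, x_std: int, n_std: int, level: float = 0.95):
    """Wald confidence interval for the difference in response proportions
    (new - standard), using the normal approximation."""
    p_new, p_std = x_new / n_new, x_std / n_std
    diff = p_new - p_std
    se = sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return diff, diff - z * se, diff + z * se

def conclusion(lower: float, upper: float, margin: float) -> str:
    """Read off the trial conclusion from the CI bounds for (new - standard)
    and the pre-specified noninferiority margin."""
    if lower > 0:
        return "superior"       # whole CI above zero
    if upper <= -margin:
        return "inferior"       # loss of efficacy exceeds the margin
    if lower >= -margin:
        return "noninferior"    # lower bound excludes loss beyond the margin
    return "inconclusive"

# The four scenarios discussed in the text (bounds as fractions):
print(conclusion(0.01, 0.11, 0.08))    # CI 1% to 11%            -> superior
print(conclusion(-0.08, 0.12, 0.08))   # lower bound -8%, margin 8% -> noninferior
print(conclusion(-0.12, -0.06, 0.06))  # CI -12% to -6%, margin 6%  -> inferior
print(conclusion(-0.12, -0.06, 0.08))  # same CI, margin 8%         -> inconclusive
```

The last two calls reproduce the point made above: the same trial data demonstrate inferiority under a 6% margin but remain inconclusive under an 8% margin, because an 8% loss was defined a priori as clinically unimportant.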
It is tempting to compare results of individual drug combinations assessed in different trials. Such comparisons are, however, difficult to interpret because of differences in entry criteria (particularly with respect to viral load and CD4 cell counts), methods of analysis (e.g. intent to treat vs. on-treatment), degrees of adherence and sensitivities of viral load assays [22,23]. Many previously unsuspected side-effects of ART have been reported only after drug licensing. It is vital that prescribers report any unsuspected adverse events as soon as possible so that these events are swiftly recognized. A yellow-card scheme, organized by the Medicines and Healthcare products Regulatory Agency, operates in the UK for reporting adverse events relating to the treatment of HIV (http://yellowcard.mhra.gov.uk). The rationale for treating with antiretroviral drugs in primary HIV infection is as follows: (i) preservation of specific anti-HIV immune responses that would otherwise be lost, and which are associated with long-term nonprogression in untreated individuals; (ii) reduction in morbidity associated with high viraemia and CD4 depletion during acute infection; and (iii) reduction in the risk of onward transmission of HIV. Multiple studies of therapy have shown conflicting results [24], with varying short-term effects on immunological markers, viral load and CD4 lymphocyte count. However, in order to make a firm recommendation, the results of a randomized prospective study are needed. The Medical Research Council (MRC) SPARTAC study is fully recruited and initial results are anticipated in 2010. In the meantime, treatment in primary infection (outside a prospective study) should only be routinely considered in those with: neurological involvement; any AIDS-defining illness; or a CD4 cell count persistently 20% over 10 years), are likely to benefit more from earlier treatment (Table 2).
Data from the SMART study [29] confirm the impression from previous cohort studies [30,31] that there is a continuous gradient of increased risk of both death and disease progression associated with lower CD4 cell counts, with no specific clear threshold at which risk increases. Furthermore, SMART has shown that untreated HIV infection is associated with greater risks of morbidity and mortality from causes not previously recognized to be HIV-related, including non-AIDS-defining malignancies. In those individuals entering the SMART study who were either treatment-naïve or had not been on therapy for the previous 6 months, the absolute risk of a new diagnosis of opportunistic disease or a serious non-AIDS event in the treatment deferral arm was 7.0 per 100 patient-years, compared with 1.6 in the virological suppression arm [32]. However, this also means that 14 patient-years of therapy were required to prevent one serious progression if treatment was started before the CD4 count fell below 350 cells/μL. As a result of these factors, we recommend that therapy should be initiated in all patients with a CD4 count of <350 cells/μL (confirmed on at least one consecutive sample, in the absence of any obvious reason for transient CD4 depletion). Several studies have suggested that CD4 percentage may have a small additional prognostic value independently of the total CD4 cell count, although the data are conflicting [33,34]. This may prompt deferral of antiretroviral treatment in some patients with CD4 counts 350 cells/μL but with low CD4 percentages (e.g. <14%, where Pneumocystis carinii pneumonia (PCP) prophylaxis is indicated [35]; some studies have indicated increased risk of disease progression in patients with low CD4 percentages). In patients with CD4 counts >350 cells/μL, multiple cohort studies have suggested that there might be benefits to ART.
This is supported by data from the substudy of patients not on therapy at entry to the SMART study [32]. Some of the previous concerns about earlier initiation of therapy have been reduced because of the availability of simpler, less toxic and better tolerated antiretroviral regimens, improved pharmacokinetic profiles and increasing options after virological failure. For the majority of patients, the absolute risk of deferring therapy until the CD4 count is <350 cells/μL is likely to be low, but in a subgroup at particularly high risk of clinical events that may be preventable by ART, this is not the case. For all these reasons, in a small number of patients, treatment may be started or considered before the C