Letter, Open Access, Peer Reviewed

Safety of drug-eluting stents: demystifying network meta-analysis

2007; Elsevier BV; Volume: 370; Issue: 9605; Language: English

DOI

10.1016/s0140-6736(07)61898-4

ISSN

1474-547X

Authors

Stuart J. Pocock

Topic(s)

Cerebrovascular and Carotid Artery Diseases

Abstract

Given concerns about the safety of drug-eluting stents, Christoph Stettler and colleagues (Sept 15, p 937)[1] provide the most extensive meta-analysis to date comparing sirolimus-eluting stents, paclitaxel-eluting stents, and bare-metal stents: 38 trials in 18 023 patients. Unfortunately, their statistical methods are so complex ("an extension of multivariable Bayesian hierarchical random effects models for mixed multiple treatment comparisons") that many are mystified as to whether the conclusions make sense. Their webtable of data by trial enables the following explanation by use of simple statistics.

I concentrate on Stettler and colleagues' finding of significantly fewer myocardial infarctions for sirolimus-eluting stents compared with both paclitaxel-eluting and bare-metal stents. Since all trials had 1:1 randomisation, a sensible start is to add up the numbers of patients who had a myocardial infarction in head-to-head comparisons. For the 17 trials comparing sirolimus-eluting stents with paclitaxel-eluting stents there were 170 versus 203 myocardial infarctions. The approximate risk ratio is 170/203 = 0·84. The simplest statistical test comparing these two counts[2] is reliable when event rates are low and gives p=0·09. This result agrees with Stettler and colleagues' conventional meta-analysis: hazard ratio 0·84 (95% CI 0·69–1·02).
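The arithmetic behind this paragraph can be reproduced in a few lines. The sketch below assumes the "simplest statistical test" of the cited BMJ paper takes the form z = (a − b)/√(a + b) for two event counts a and b from a 1:1 randomised comparison, with a two-sided p-value from the normal approximation; it is an illustration, not the authors' network model.

```python
from math import erf, sqrt

def simplest_test(a, b):
    """Compare two event counts a and b from a 1:1 randomised comparison:
    z = (a - b) / sqrt(a + b), with a two-sided normal-approximation p-value.
    (Assumed form of the 'simplest statistical test'; reasonable when event
    rates are low.)"""
    z = (a - b) / sqrt(a + b)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided tail area
    return z, p

# Sirolimus- vs paclitaxel-eluting stents: 170 vs 203 myocardial infarctions
z, p = simplest_test(170, 203)
print(round(170 / 203, 2), round(z, 2), round(p, 2))  # → 0.84 -1.71 0.09
```

The same function applied to 135 versus 129 events (paclitaxel-eluting versus bare-metal stents) gives p close to the 0·7 quoted in the text.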
So why does their main approach, the "network" meta-analysis, push this finding to a stronger level of significance? For sirolimus-eluting stents versus bare-metal stents (15 trials) there were 119 versus 142 myocardial infarctions (hazard ratio 0·86, p=0·12), and for paclitaxel-eluting stents versus bare-metal stents (eight trials) there were 135 versus 129 myocardial infarctions (hazard ratio 1·06, p=0·7). The network meta-analysis uses this indirect evidence, namely that (1) bare-metal stents and paclitaxel-eluting stents had similar myocardial infarction risks, and (2) there is a non-significant trend towards fewer myocardial infarctions with sirolimus-eluting stents versus bare-metal stents, to reinforce the non-significant trend of fewer myocardial infarctions with sirolimus-eluting stents versus paclitaxel-eluting stents. As if by magic, the complex modelling in the network meta-analysis pushes the sirolimus-eluting versus paclitaxel-eluting comparison to p=0·045 and the sirolimus-eluting versus bare-metal comparison to p=0·03.

This might be squeezing the data too hard. Virtually all other meta-analyses stick to head-to-head comparisons of treatments, which seems inherently wise. Use of indirect comparisons entails strong assumptions: it ignores the fact that the bare-metal stent comparators in the sirolimus-eluting and paclitaxel-eluting trials are different.

I am also concerned about the increasing reliance on random-effects methods for meta-analyses. As they try to capture statistical heterogeneity between studies, they perversely increase the amount of weight given to small studies, so one small outlying trial can have undue influence. Fixed-effect methods avoid this problem and are more readily accessible to non-specialists (eg, adding up the numbers of myocardial infarctions gives meaningful insight).
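The weighting concern can be shown numerically. This is a minimal sketch with made-up numbers, not the authors' model: two trials receive standard inverse-variance weights, and a hypothetical between-trial variance tau2 (as in random-effects models such as DerSimonian-Laird) is added for the random-effects case.

```python
# Hypothetical within-trial variances: one large precise trial,
# one small imprecise trial.
variances = [0.01, 0.5]
tau2 = 0.25  # assumed between-trial variance (illustrative only)

fixed = [1 / v for v in variances]            # fixed-effect weights
random = [1 / (v + tau2) for v in variances]  # random-effects weights

def relative(weights):
    """Normalise weights to proportions of the total."""
    total = sum(weights)
    return [round(w / total, 2) for w in weights]

print(relative(fixed))   # → [0.98, 0.02]: small trial gets ~2% of the weight
print(relative(random))  # → [0.74, 0.26]: its share rises to ~26%
```

Adding tau2 to every trial's variance shrinks the differences between weights, which is exactly why one small outlying trial gains influence under random effects.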
By all means do both if space allows, and I would encourage Stettler and colleagues to publish a fixed-effect head-to-head version of all their findings.

Another concern is that any such meta-analysis combines evidence from trials that are substantially different in their design. For instance, trials of primary percutaneous coronary intervention after myocardial infarction are combined with trials in stable coronary disease; the definition of myocardial infarction will be inconsistent across studies, as will the angiographic eligibility criteria. These issues all raise concerns as to the extent to which the meta-analysis conclusions can be trusted[3] and to whom the findings apply.

I have served on data monitoring committees of studies sponsored by Boston Scientific and Johnson and Johnson, and have consulted for Abbott Vascular.

Reference(s)

1. Stettler C, Wandel S, Allemann S, et al. Outcomes associated with drug-eluting and bare metal stents: a collaborative network meta-analysis. Lancet 2007; 370: 937–948.
2. Pocock SJ. The simplest statistical test: how to check for a difference between treatments. BMJ 2006; 332: 1256–1258.
3. Thompson SG, Pocock SJ. Can meta-analyses be trusted? Lancet 1991; 338: 1127–1130.