Editorial Open access Peer-reviewed

Publication bias: Graphical and statistical methods

2021; Elsevier BV; Volume: 159; Issue: 2; Language: English

10.1016/j.ajodo.2020.11.005

ISSN

1097-6752

Authors

Loukia M. Spineli, Nikolaos Pandis

Topic(s)

Health Sciences Research and Education

Abstract

Various statistical approaches and visual tools have been developed to detect, estimate, and evaluate the impact of publication bias in meta-analysis results. In this article, we present the most popular statistical methods and graphic tools to address publication bias using an example. The fixed-effect model assigns greater weights to larger studies and is therefore known to favor them. By contrast, the random-effects model aims to balance the weights more evenly across small and large studies. With substantial small-study effects, where an intervention seems to be more beneficial in smaller studies, the random-effects summary effect size will present the intervention as being more beneficial than the fixed-effect summary effect size.1Boutron I. Page M.J. Higgins J.P.T. Altman D.G. Lundh A. Hróbjartsson A. Chapter 7. Considering bias and conflicts of interest among the included studies.in: Higgins J.P.T. Thomas J. Chandler J. Cumpston M. Li T. Page M.J. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane, Chichester, United Kingdom2019Crossref Scopus (97) Google Scholar The small-study effects issue is one of many factors responsible for heterogeneity in the meta-analysis results. This means that if an intervention seems to be more beneficial under the random-effects model than under the fixed-effect model, the researchers should investigate further whether they should attribute this difference to the small-study effects alone (the intervention was more effective in smaller studies) or to other study characteristics.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar The forest plot in Figure 1 displays the meta-analysis results on the effectiveness of fluoride gel against placebo for preventing dental caries in children and adolescents under the random-effects and fixed-effect models.3Marinho V.C.C. Higgins J.P.
Logan S. Sheiham A. Fluoride gels for preventing dental caries in children and adolescents.Cochrane Database Syst Rev. 2002; : CD002280PubMed Google Scholar The point estimates differ very slightly, and the confidence intervals overlap perfectly. Note how much wider the confidence interval under the random-effects model is compared with the confidence interval under the fixed-effect model. The similarity of the results is an indication of the possible low impact of small-study effects. Forest plots provide only a visual exploration; hence, further investigation is required (eg, funnel plot and proper statistical methods) to determine any small-study effects and possible publication bias. Another graphical tool to investigate the relationship between study size and effect size is the funnel plot.4Borenstein M. Hedges L.V. Higgins J.P.T. Rothstein H.R. Chapter 30. Publication bias. In: Introduction to Meta-Analysis. John Wiley & Sons, Chichester, United Kingdom2009Crossref Scopus (10298) Google Scholar The funnel plot is a scatter plot in which the effect sizes are plotted on the x-axis and the standard errors of the effect sizes on the y-axis. The spread of the points creates a pattern like a funnel. In the funnel plot, the points corresponding to studies with smaller sample size are scattered on the bottom of the funnel (because they yield effects with larger standard errors), and points corresponding to studies with larger sample size are scattered in a narrow range of values at the top of the funnel (because they yield effects with smaller standard errors). Instead of standard errors, we could have used the sample size of the studies or the variance of the effect sizes.4Borenstein M. Hedges L.V. Higgins J.P.T. Rothstein H.R. Chapter 30. Publication bias. In: Introduction to Meta-Analysis. 
John Wiley & Sons, Chichester, United Kingdom2009Crossref Scopus (10298) Google Scholar However, only the standard errors can spread out the points on the bottom of the funnel where the smaller studies are found and create a funnel-like pattern.4Borenstein M. Hedges L.V. Higgins J.P.T. Rothstein H.R. Chapter 30. Publication bias. In: Introduction to Meta-Analysis. John Wiley & Sons, Chichester, United Kingdom2009Crossref Scopus (10298) Google Scholar To determine whether there is publication bias or small-study effects, we need to understand how the points are distributed. A symmetrical distribution of the points about the summary effect size is an indication of the absence of possible small-study effects or publication bias. However, any asymmetrical distribution of the points may support the presence of possible small-study effects or publication bias.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar,4Borenstein M. Hedges L.V. Higgins J.P.T. Rothstein H.R. Chapter 30. Publication bias. In: Introduction to Meta-Analysis. John Wiley & Sons, Chichester, United Kingdom2009Crossref Scopus (10298) Google Scholar The typical pattern in the presence of small-study effects is a prominent asymmetry at the bottom that progressively disappears as we move up to larger studies.4Borenstein M. Hedges L.V. Higgins J.P.T. Rothstein H.R. Chapter 30. Publication bias. In: Introduction to Meta-Analysis. John Wiley & Sons, Chichester, United Kingdom2009Crossref Scopus (10298) Google Scholar Figure 2 illustrates the funnel plot of our example. The effect sizes have been estimated using the fixed-effect model. The black line displays the summary effect size, and the red dotted line refers to no effect. The diagonal lines represent the pseudo 95% confidence limits around the summary effect size for each standard error on the vertical axis.
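As a minimal sketch of these computations (using made-up effect sizes and standard errors, not the fluoride-gel data from the article), the inverse-variance fixed-effect summary and the pseudo 95% confidence limits of a funnel plot can be obtained as follows; at each standard error s on the vertical axis, the funnel boundary is simply the summary effect ± 1.96s:

```python
import numpy as np

# Hypothetical log risk ratios and standard errors for ten studies;
# the values are illustrative, not taken from the article's example.
yi = np.array([-0.40, -0.32, -0.28, -0.45, -0.15, -0.60, -0.25, -0.35, -0.10, -0.50])
sei = np.array([0.10, 0.12, 0.08, 0.20, 0.15, 0.30, 0.09, 0.11, 0.25, 0.28])

# Inverse-variance fixed-effect summary: weight each study by 1 / SE^2.
wi = 1.0 / sei**2
summary = np.sum(wi * yi) / np.sum(wi)

# Pseudo 95% confidence limits of the funnel plot: for every standard
# error s on the vertical axis, the funnel boundary is summary +/- 1.96 * s.
s_grid = np.linspace(0.0, sei.max(), 50)
lower = summary - 1.96 * s_grid
upper = summary + 1.96 * s_grid

# A study falls outside the funnel when its effect lies farther than
# 1.96 times its own standard error from the summary effect.
outside = np.abs(yi - summary) > 1.96 * sei
print(f"fixed-effect summary log RR: {summary:.3f}")
print(f"studies outside the pseudo 95% limits: {int(outside.sum())}")
```

Plotting `yi` against `sei` (with the y-axis inverted) together with `lower` and `upper` reproduces the funnel shape described above.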
In the absence of heterogeneity, 95% of the studies should be scattered within the funnel as defined by these diagonal lines. The asymmetry of the funnel plot is evident; toward the bottom of the plot, there is only 1 small study, whereas the majority of the studies are scattered above the middle of the funnel plot. Four studies are outside the pseudo 95% confidence limits. The absence of smaller studies (equivalently, the presence of only 1 small study) at the bottom of the plot and of studies with small effects (on the right side of the plot) is a strong indication of possible publication bias in the meta-analysis results. In this case, we suspect that the summary effect may be biased. However, the absence of smaller studies might also indicate that the effectiveness of fluoride gel was likely to be investigated mainly in moderate and large studies. Therefore, publication bias cannot be perceived as the only cause of funnel asymmetry.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar Although funnel plot asymmetry has been associated mostly with publication bias and small-study effects, they are not the sole reasons for an asymmetric funnel plot.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar Was the literature search intensive and comprehensive? Is there any difference in the quality of the studies? For instance, it has been shown that small studies with poor design and conduct tend to overestimate the effect sizes; hence, the lower methodological quality of smaller studies might contribute to the asymmetry at the bottom of the funnel plot.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ.
1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar Is there a plausible reason to explain the larger effect sizes in smaller trials? For instance, the choice of the measure of effect can also result in funnel plot asymmetry.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar It has been shown that the risk difference provides more heterogeneous effect sizes than the risk ratio or odds ratio because it tends to underestimate the effect sizes in studies with low event rates5Deeks J.J. Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes.Stat Med. 2002; 21: 1575-1600Crossref PubMed Scopus (438) Google Scholar; hence, it can result in funnel plot asymmetry. Therefore, various reasons related to study characteristics and/or the analysis of the study results should be considered before we attribute the asymmetry entirely to publication bias.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar,6Peters J.L. Sutton A.J. Jones D.R. Abrams K.R. Rushton L. Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry.J Clin Epidemiol. 2008; 61: 991-996Abstract Full Text Full Text PDF PubMed Scopus (827) Google Scholar Meta-regression and subgroup analysis are the tools to investigate possible associations between study characteristics and effect sizes. Publication bias has been equated mainly with the suppression of studies with statistically nonsignificant results. The funnel plot in Figure 2 fails to display the level of statistical significance of the effect sizes to assess whether the level of significance is likely to explain the asymmetry.6Peters J.L. Sutton A.J. Jones D.R. Abrams K.R. Rushton L.
Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry.J Clin Epidemiol. 2008; 61: 991-996Abstract Full Text Full Text PDF PubMed Scopus (827) Google Scholar A contour-enhanced funnel plot is an extension of the conventional funnel plot that incorporates contours of the level of significance of the effect sizes.6Peters J.L. Sutton A.J. Jones D.R. Abrams K.R. Rushton L. Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry.J Clin Epidemiol. 2008; 61: 991-996Abstract Full Text Full Text PDF PubMed Scopus (827) Google Scholar If more studies are present in the contours of statistical significance than in areas of statistical nonsignificance, the funnel plot will be asymmetric.6Peters J.L. Sutton A.J. Jones D.R. Abrams K.R. Rushton L. Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry.J Clin Epidemiol. 2008; 61: 991-996Abstract Full Text Full Text PDF PubMed Scopus (827) Google Scholar This asymmetry is indicative of the inherent association between effect size and level of significance.6Peters J.L. Sutton A.J. Jones D.R. Abrams K.R. Rushton L. Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry.J Clin Epidemiol. 2008; 61: 991-996Abstract Full Text Full Text PDF PubMed Scopus (827) Google Scholar Studies that tend to favor the experimental over the control intervention are more likely to provide statistically significant results and, hence, to be present in the contours of statistical significance. By contrast, if more studies are absent in the contours of statistical significance than in areas of statistical nonsignificance, then the risk of publication bias might be low. Figure 3 depicts the contour-enhanced funnel plot of our example. The black line displays the summary effect size, and the red dotted line refers to no effect.
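The significance contours can be reproduced numerically: each study's two-sided p-value against the line of no effect (under a normal approximation) determines the shading band it would occupy in a contour-enhanced funnel plot. The data below are illustrative, not the article's example:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical effects (log RR) and standard errors; illustrative only.
yi = np.array([-0.40, -0.32, -0.05, -0.45, -0.15, -0.60])
sei = np.array([0.10, 0.12, 0.18, 0.20, 0.15, 0.30])

# Two-sided p-value of each study against no effect, using the normal
# approximation p = 2 * (1 - Phi(|z|)) with z = effect / SE.
z = np.abs(yi / sei)
p = np.array([2.0 * (1.0 - 0.5 * (1.0 + erf(zz / sqrt(2.0)))) for zz in z])

# Assign each study to the contour (shading band) it falls into.
bands = ["p < 0.01", "0.01 <= p < 0.05", "0.05 <= p < 0.10", "p >= 0.10"]
idx = np.searchsorted([0.01, 0.05, 0.10], p, side="right")
for pi, b in zip(p, idx):
    print(f"p = {pi:.3f} -> {bands[b]}")
```

If almost all studies land in the significant bands while the nonsignificant bands are empty, the pattern is consistent with suppression of nonsignificant results.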
The gray-scaled contours correspond to different levels of significance, as indicated in the accompanying box. Studies are missing at all levels of significance both on the right side and at the bottom of the plot, creating pronounced funnel plot asymmetry. The fact that mostly moderate and large studies with statistically significant larger effect sizes are present indicates that publication bias may be the cause of funnel asymmetry; hence, further investigation is needed to detect possible associations between effect size and study characteristics. Because the interpretation of the funnel plot is subjective, statistical tests that assess the relationship between sample size and effect size have been developed. The test for small-study effects proposed by Egger et al2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar is the most widely used approach (Fig 4). A regression line of the effect sizes against their standard errors (using the inverse variance weighting scheme) is drawn on the funnel plot.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar If there is no association between effect size and standard error, the regression line will be parallel to the x-axis.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar To examine whether there is evidence for statistically significant asymmetry, we can perform a statistical test known as Egger's test. Specifically, we test whether the intercept is equal to 0, which implies that the regression line runs through the origin of the plot and, hence, that the funnel plot is symmetrical.
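A minimal sketch of Egger's regression (with hypothetical data) can be written in a few lines: the standardized effect y/SE is regressed on the precision 1/SE, and the t statistic of the intercept measures funnel asymmetry:

```python
import numpy as np

# Hypothetical effects and standard errors; illustrative only.
yi = np.array([-0.55, -0.48, -0.30, -0.28, -0.22, -0.25, -0.20, -0.18])
sei = np.array([0.30, 0.25, 0.15, 0.12, 0.10, 0.09, 0.08, 0.07])

# Egger's regression: standardized effect (y / SE) against precision (1 / SE).
# A non-zero intercept indicates funnel plot asymmetry.
z = yi / sei
prec = 1.0 / sei
X = np.column_stack([np.ones_like(prec), prec])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)

# Standard error of the intercept from the usual OLS formulas.
resid = z - X @ beta
n, k = X.shape
sigma2 = resid @ resid / (n - k)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_intercept = beta[0] / np.sqrt(cov[0, 0])
print(f"intercept: {beta[0]:.3f}, t = {t_intercept:.2f}")
```

The resulting |t| is compared with the t distribution on n - 2 degrees of freedom; as the text notes, the test is conventionally judged at the 0.10 level.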
If the P value is below 0.10, we may conclude that there is evidence of asymmetry because of small-study effects.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar However, this test suffers from the caveat of any test of significance: it has low power.1Boutron I. Page M.J. Higgins J.P.T. Altman D.G. Lundh A. Hróbjartsson A. Chapter 7. Considering bias and conflicts of interest among the included studies.in: Higgins J.P.T. Thomas J. Chandler J. Cumpston M. Li T. Page M.J. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane, Chichester, United Kingdom2019Crossref Scopus (97) Google Scholar Therefore, the rejection of the null hypothesis should not be considered definite evidence of asymmetry. In addition, tests for funnel plot asymmetry should be performed only when there are at least 10 studies in the meta-analysis; otherwise, the power will be too low to detect any true relationship.1Boutron I. Page M.J. Higgins J.P.T. Altman D.G. Lundh A. Hróbjartsson A. Chapter 7. Considering bias and conflicts of interest among the included studies.in: Higgins J.P.T. Thomas J. Chandler J. Cumpston M. Li T. Page M.J. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane, Chichester, United Kingdom2019Crossref Scopus (97) Google Scholar These tests should also not be implemented when the studies have similar sample sizes, because there is then little spread in the standard errors for the regression to exploit.1Boutron I. Page M.J. Higgins J.P.T. Altman D.G. Lundh A. Hróbjartsson A. Chapter 7. Considering bias and conflicts of interest among the included studies.in: Higgins J.P.T. Thomas J. Chandler J. Cumpston M. Li T. Page M.J. Cochrane Handbook for Systematic Reviews of Interventions.
Cochrane, Chichester, United Kingdom2019Crossref Scopus (97) Google Scholar Please note that small-study effects are not the only source of asymmetry in funnel plots; language bias, citation bias, true heterogeneity, poor methodological design of small studies, poor choice of effect measures, and chance are also potential sources of asymmetry.2Egger M. Davey Smith G. Schneider M. Minder C. Bias in meta-analysis detected by a simple, graphical test.BMJ. 1997; 315: 629-634Crossref PubMed Scopus (30877) Google Scholar In the presence of publication bias, the meta-analysis results will not reflect reality. If we could retrieve all the missing studies, then the direction or the statistical significance of the results might be different.1Boutron I. Page M.J. Higgins J.P.T. Altman D.G. Lundh A. Hróbjartsson A. Chapter 7. Considering bias and conflicts of interest among the included studies.in: Higgins J.P.T. Thomas J. Chandler J. Cumpston M. Li T. Page M.J. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane, Chichester, United Kingdom2019Crossref Scopus (97) Google Scholar Approaches to investigate the robustness of the summary effect size have been proposed by Rosenthal and Orwin.7Rosenthal R. The file drawer problem and tolerance for null results.Psychol Bull. 1979; 86: 638-641Crossref Scopus (5017) Google Scholar,8Orwin R.G. A fail-safe N for effect size in meta-analysis.J Educ Behav Stat. 1983; 8: 157-159Crossref Google Scholar These approaches are presented very briefly below. According to Rosenthal, we could estimate the number of (additional 'negative') studies needed to be retrieved and included in the meta-analysis to increase the P value for the meta-analysis above 0.05.7Rosenthal R. The file drawer problem and tolerance for null results.Psychol Bull. 1979; 86: 638-641Crossref Scopus (5017) Google Scholar If we need only a small number of studies, we should be concerned about the robustness of the summary effect size. 
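Rosenthal's calculation can be sketched with the Stouffer combination of one-sided z-scores (the values below are hypothetical): the combined z over the k observed studies plus N imputed null studies is sum(z)/sqrt(k + N), and setting it equal to the critical value 1.645 and solving gives N:

```python
import numpy as np

# Hypothetical one-sided z-scores of k observed studies; illustrative only.
z = np.array([2.8, 2.1, 1.9, 2.5, 1.7, 2.2])
k = len(z)

# Rosenthal's fail-safe N (Stouffer combination): the number of unpublished
# studies averaging z = 0 needed to push the combined one-sided p above 0.05.
# Combined z over k + N studies: sum(z) / sqrt(k + N) = 1.645
#   =>  N = (sum(z))^2 / 1.645^2 - k
z_alpha = 1.645
n_fs = (z.sum() ** 2) / (z_alpha ** 2) - k
print(f"fail-safe N: {n_fs:.1f}")
```

A small N relative to the number of observed studies would signal a fragile summary effect.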
This approach is known as Fail-safe N. Orwin's approach is an extension of Rosenthal's strategy: researchers can determine the number of missing studies needed to bring the mean effect size below a specific value other than 0 (for continuous measures) or 1 (for ratio measures).8Orwin R.G. A fail-safe N for effect size in meta-analysis.J Educ Behav Stat. 1983; 8: 157-159Crossref Google Scholar This specific value would be an effect size that indicates a minimum clinically important difference between the compared interventions. The researchers could also specify the mean effect size in the missing studies to be any value other than 0 or 1. Two popular approaches to judge the impact of publication bias on the conclusions drawn from the meta-analysis results are the trim-and-fill approach and the Copas probabilistic model.9Duval S. Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis.Biometrics. 2000; 56: 455-463Crossref PubMed Scopus (7269) Google Scholar,10Copas J. What works?: selectivity models and meta-analysis.J R Stat Soc A. 1999; 162: 95-109Crossref Scopus (118) Google Scholar A brief description of these approaches is provided below. A method known as "trim and fill" is implemented to understand the impact of publication bias.9Duval S. Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis.Biometrics. 2000; 56: 455-463Crossref PubMed Scopus (7269) Google Scholar This method is an iterative procedure that identifies and corrects the asymmetry in the funnel plot. It removes the small studies with the most extreme results (trim) and recalculates the summary effect size at each iteration until the funnel plot becomes symmetric.9Duval S. Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis.Biometrics. 
2000; 56: 455-463Crossref PubMed Scopus (7269) Google Scholar Then, the removed studies are added back into the analysis, and a "mirror value" is computed for each one (fill). The output of this method is an "adjusted" funnel plot that depicts both the observed and the imputed studies so that the researcher can see how much the summary effect size changes when the imputed studies are included (trivially, modestly, or substantially).9Duval S. Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis.Biometrics. 2000; 56: 455-463Crossref PubMed Scopus (7269) Google Scholar Another approach to assess the impact of publication bias is to use probabilistic models. Copas10Copas J. What works?: selectivity models and meta-analysis.J R Stat Soc A. 1999; 162: 95-109Crossref Scopus (118) Google Scholar examined the relationship between the standard error of a study and the probability that this study will be included in a meta-analysis. A sensitivity analysis is performed to estimate the summary effect sizes under a range of assumptions about the probability of publication. Then the researcher can explore how the estimated summary effect sizes and their confidence intervals vary across the different scenarios assumed in the sensitivity analysis and determine the impact of publication bias in the results.
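As a much-simplified illustration of the "fill" step of the trim-and-fill adjustment described above (the full method iterates the trimming and re-estimation), suppose the three most extreme studies have already been flagged, for illustration, as the unmatched asymmetric ones; mirroring them about the summary effect and recomputing shows how the adjusted estimate shifts. All values below are made up:

```python
import numpy as np

def fe_summary(y, se):
    """Inverse-variance fixed-effect summary."""
    w = 1.0 / se**2
    return np.sum(w * y) / np.sum(w)

# Hypothetical effects (log RR) and standard errors; the three most extreme
# negative effects are assumed here to be the unmatched "asymmetric" studies.
yi = np.array([-0.70, -0.60, -0.55, -0.30, -0.28, -0.25, -0.22, -0.20])
sei = np.array([0.30, 0.28, 0.25, 0.12, 0.10, 0.09, 0.08, 0.07])

before = fe_summary(yi, sei)

# "Fill": for each assumed-missing study, impute a mirror image reflected
# about the summary effect, keeping the same standard error.
extreme = np.argsort(yi)[:3]            # indices of the 3 most extreme studies
mirror_y = 2.0 * before - yi[extreme]
y_filled = np.concatenate([yi, mirror_y])
se_filled = np.concatenate([sei, sei[extreme]])

after = fe_summary(y_filled, se_filled)
print(f"summary before fill: {before:.3f}, after fill: {after:.3f}")
```

Here the filled summary moves toward the null, which is exactly the comparison (trivial, modest, or substantial change) the adjusted funnel plot is meant to support.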
