Article | Open access | Peer-reviewed

The use and abuse of meta‐analysis

2020; Wiley; Volume: 55; Issue: 6; Language: English

10.1002/uog.22060

ISSN

1469-0705

Authors

Alexandros Sotiriadis, Christos Chatzakis, Anthony Odibo

Topic(s)

Neonatal Respiratory Health Research

Abstract

Ultrasound in Obstetrics & Gynecology, Volume 55, Issue 6, pp. 719-723. Opinion. Free access.

The use and abuse of meta-analysis

A. Sotiriadis (corresponding author), Second Department of Obstetrics and Gynecology, Faculty of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece (orcid.org/0000-0003-0876-5596)
C. Chatzakis, Second Department of Obstetrics and Gynecology, Faculty of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece (orcid.org/0000-0002-5895-6887)
A. O. Odibo, Department of Obstetrics and Gynecology, University of South Florida Morsani College of Medicine, Tampa, FL, USA

First published: 01 June 2020. https://doi.org/10.1002/uog.22060

The method of pooling data from different studies to investigate an outcome of interest was first used in 1904 by Karl Pearson, in an article on typhoid vaccine studies [1]. However, the term 'meta-analysis' was not coined until 1976, when it was used to describe the 'analysis of analyses', as opposed to the primary analysis of original data obtained in a single research study or the secondary reanalysis of original data using different statistical methods or exploring new outcomes [2]. Gradually, meta-analysis was adopted widely across diverse disciplines, and it acquired prominence and influence in many fields, including obstetrics and gynecology. An early milestone meta-analysis in our field is the one on antenatal steroids. In the 1970s and 1980s, the findings of studies on antenatal corticosteroids were conflicting, and the obstetric community was therefore reluctant to embrace them. This changed in 1990, when Crowley et al. [3] performed a formal statistical synthesis of data from controlled trials, showing that antenatal corticosteroids reduce the risk of respiratory distress syndrome, and suggested that the observed variation in findings between the studies might be due to the different clinical characteristics of their participants.
One of the plots from this study later formed the basis of the logo of the Cochrane Collaboration, and the report highlights the two main purposes of a meta-analysis: (1) synthesis of the available evidence on a given topic; and (2) exploration and explanation of heterogeneity between studies.

Uses of meta-analysis

A meta-analysis aims to synthesize the available data on a topic of interest in a transparent, inclusive, structured and analytical way [4,5]. At the level of the individual reader this is, in principle, valuable, given the current information overload [6] and shortening attention spans. On a broader scale, assessment of the risk of bias of the included studies and of the overall quality of the evidence, which is an integral part of modern meta-analysis, allows the available evidence to be distilled into a recommendation, a necessary building block for the development of clinical guidelines [7,8]. As with any tool, a meta-analysis performs optimally when certain conditions are met [9,10]. It performs best when it combines data from multiple randomized controlled trials (RCTs) of similar design, size and background. Conversely, when the number of included studies is too small or the data are too dissimilar, a single pooled estimate may be less useful; in such cases, exploration of heterogeneity becomes the main target, and this is particularly important in the case of observational studies.

Limitations of meta-analysis

The limitations of meta-analysis can be intrinsic, arising from the tool itself, or extrinsic, arising from authors (and, to a lesser extent, readers) misusing the method.

Dependence on quality of primary data

In principle, the quality of a product depends heavily on the quality of its ingredients. The credibility of the results of a meta-analysis is therefore directly associated with the quality of the studies it synthesizes. This was one of the first reservations voiced against meta-analysis, under the motto 'garbage in, garbage out' [11]. However, this weakness may actually be the main strength of a meta-analysis. A good meta-analysis is preceded by a systematic review of the literature, which, ideally, allows a formal and meticulous assessment of the potential flaws of the studies considered for inclusion. Several standardized tools have been developed for assessing the risk of bias in studies, some of which are widely used, such as the Cochrane risk-of-bias tool [12]. Another significant development has been the introduction of the GRADE (Grading of Recommendations Assessment, Development and Evaluation) system for assessing the quality of evidence and the degree of confidence in the results of systematic reviews [13]. It is crucial that authors are transparent and objective about the quality of the evidence included in a meta-analysis. Presenting mixed- or poor-quality data under the polished packaging of a meta-analysis, without acknowledging the quality of the underlying data, resembles the practice of investment banks that packaged subprime loans and sold them as investment assets. The latter practice contributed to the financial crisis of 2008; a similar practice in research-data synthesis could lead to a credibility crisis in medicine. However, it is unfair to blame systematic reviews and meta-analyses for the existence of subpar studies. In fact, systematic review may be the best method for dealing systematically with low-quality studies in the literature.
A recent commentary in The Lancet highlighted the concern that false and fabricated data in primary studies may be more common than we think; inclusion of such studies in a meta-analysis may produce misinformed findings and, eventually, mislead clinical practice [14]. A meta-analysis of survey data reported that 2% of participating scientists admitted having fabricated or falsified data themselves; however, 14% reported knowing someone else who had done so, and 72% knew someone who had used questionable research practices [15]. There is no easy solution to this problem. Proposed actions include adopting strict and restrictive eligibility criteria, such as pooling only prospectively registered, low-bias studies [14]; however, this practice would eliminate more than 90% of the existing literature, and many of the excluded studies would still have some value. Moreover, no matter how suspicious a published study may seem, it can be impossible to prove that it is fabricated, since it has already passed the quality control of peer review.

Synthesis of dissimilar data

Another early reservation expressed about meta-analysis relates to what is often described as 'comparing apples with oranges' [11]. A common misconception is that a meta-analysis should combine data only from RCTs with characteristics so similar that they could be hypothesized to be random samples of the same population, sharing a common underlying effect size. This is the principle behind the fixed-effect model, which assumes that the observed deviations from this common effect size are due only to sampling error. In reality, it is common practice in a meta-analysis to pool dissimilar studies, and this can artificially dilute or enhance the actual effects. However, even if one compares apples with apples, it is doubtful whether any two apples are truly comparable. Almost no two studies are the same, for many reasons: researchers may consider it belittling to replicate the design of previous studies, and funders push for novel and innovative work. Only a few studies are truly innovative, while the great majority are similar, i.e. neither identical nor different [5,16]. An appropriately performed meta-analysis uses valid statistical methods (e.g. random-effects models) to combine different types of apples and oranges, and this is the best way to understand how different these fruits are. The alternative is non-quantitative expert opinion, which is known to be heavily biased.
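To make the distinction between the two pooling models concrete, the sketch below contrasts them on invented toy data (five hypothetical trials reporting log relative risks and standard errors; none of these numbers come from the studies discussed in this article). The fixed-effect estimate weights each study by its inverse variance, while the random-effects estimate (here, the commonly used DerSimonian-Laird approach, one of several available estimators) adds a between-study variance term, widening the confidence interval when the studies disagree.

```python
import math

# Toy data: log relative risks and their standard errors from five
# hypothetical trials (invented numbers, for illustration only).
y  = [-0.35, -0.10, -0.60, 0.05, -0.45]   # log(RR) per study
se = [ 0.20,  0.15,  0.30, 0.25,  0.18]   # standard errors

def pooled(y, se, tau2=0.0):
    """Inverse-variance pooled estimate; tau2 > 0 gives random effects."""
    w = [1.0 / (s**2 + tau2) for s in se]
    est = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return est, se_pooled

# Fixed effect: assumes one true effect; deviations are sampling error.
fe, fe_se = pooled(y, se)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = [1.0 / s**2 for s in se]
q = sum(wi * (yi - fe)**2 for wi, yi in zip(w, y))      # Cochran's Q
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random effects: allows a distribution of true effects across studies.
re, re_se = pooled(y, se, tau2)

for label, est, s in [("fixed", fe, fe_se), ("random", re, re_se)]:
    lo, hi = est - 1.96 * s, est + 1.96 * s
    print(f"{label:>6}: RR {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

With heterogeneous inputs, the random-effects interval comes out wider than the fixed-effect one, which is exactly the behavior described above: the model acknowledges that the 'apples' differ rather than pretending they are identical.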
Overuse, misuse and abuse of meta-analyses

Overuse

There are simply too many meta-analyses in the literature. Ten years after the launch of the Cochrane Collaboration, it was calculated that about 10 000 regularly updated Cochrane reviews would adequately cover all studies published until 2003 that were relevant to the entire field of healthcare [17]. In comparison, 12 536 articles published in 2018 were tagged as meta-analyses in PubMed, and the corresponding number for 2019 was 21 423. Are all these new meta-analyses informative? It appears that this is not usually the case. There is a great degree of overlap between published meta-analyses; an empirical study of 73 randomly selected meta-analyses published in 2010 showed that two-thirds of them overlapped with at least one other meta-analysis, and their results were often similar or identical [18].

The corrected covered area (CCA) is a metric that uses a citation matrix of all primary studies (rows) included in each review (columns) to produce a measure of overlap, i.e. the extent to which the primary studies in the reviews are the same or different [19]. Scores higher than 15% indicate significant overlap [20]. As an example, this method was used to evaluate the overlap of publications on the use of non-vitamin-K oral anticoagulants for atrial fibrillation, and showed that there were significantly more systematic reviews (n = 57) than RCTs (n = 14) on this topic, yielding a very high CCA value of 24%. We calculated this metric for two important topics in our field: (1) the use of prophylactic progesterone for the prevention of preterm birth in singleton pregnancies with a short cervix or a history of preterm birth (RCTs only); and (2) the use of low-dose aspirin for the prevention of pre-eclampsia in high-risk patients (RCTs only). The methods and results of this analysis are presented in detail in Appendix S1. The CCA for the use of prophylactic progesterone in preterm birth is 21.02%, indicating very high overlap, and the CCA for the use of aspirin for the prevention of pre-eclampsia is even greater, at 31.2%.
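The CCA itself is straightforward to compute. The minimal sketch below uses an invented citation matrix (not the data from Appendix S1) and follows the published definition [19,20]: CCA = (N - r) / (rc - r), where N is the total number of inclusion 'ticks' in the matrix, r is the number of distinct primary studies (rows) and c is the number of reviews (columns).

```python
# Rows = distinct primary studies, columns = systematic reviews.
# matrix[i][j] = 1 if review j includes primary study i.
# The matrix below is invented, purely to illustrate the calculation.
matrix = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
]

def corrected_covered_area(matrix):
    """CCA = (N - r) / (r*c - r), as defined by Pieper et al. [20]."""
    r = len(matrix)                       # distinct primary studies
    c = len(matrix[0])                    # reviews
    n = sum(sum(row) for row in matrix)   # total inclusions, with duplicates
    return (n - r) / (r * c - r)

cca = corrected_covered_area(matrix)
print(f"CCA = {cca:.1%}")  # values above 15% indicate significant overlap
```

A value above the 15% threshold flags that the reviews are largely recycling the same primary studies, as in the progesterone and aspirin examples above.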
Misuse

Although there are many methodological pitfalls that can result in a suboptimal meta-analysis, we would like to highlight four domains.

Pooling inadequate data. There is no set rule regarding the minimum number of studies that should be combined in a meta-analysis. A systematic review can be performed even when no relevant studies are identified through a thorough search, as demonstrating that no studies are available on a topic is informative in itself, and a meta-analysis can be performed with only two studies. However, it can be more difficult to estimate statistical heterogeneity when data are limited. Moreover, as the number of combined studies becomes smaller, the importance of their precision (i.e. their size) increases [21].

Inappropriate selection of studies. One of the first purposes for which meta-analysis was conceived was to ensure transparency and inclusiveness when pooling data from different studies. In contrast to a narrative review, in which the authors may focus on studies of their choice and steer the discussion in a direction they prefer, inclusion in a systematic review (and meta-analysis) of all available data on a topic would theoretically impose objectivity in study selection. In practice, however, this can be subverted through manipulation of the selection criteria. The risk of manipulation is greater when the protocol of a systematic review has not been registered prospectively and the selection criteria are modified post hoc, after the identification of studies or, even worse, after calculation of the pooled estimates.

Inappropriate selection of outcomes. Even when everything else is done properly, authors can steer the discussion of their meta-analysis in a direction convenient to them by choosing to highlight outcomes that better suit their interests (or those of their funders) rather than those of the patients [22]. This is now a recognized form of bias, called 'outcome reporting bias' [23]. An extension of this phenomenon is the use of composite outcomes, which may be useful in primary studies in which each component of the composite outcome is rare. However, in meta-analyses, which overcome the limitation of small sample size and power by combining data from multiple studies, analysis of composite outcomes is meaningful only if the components are of similar clinical significance; otherwise, their interpretation can be misleading, especially when the composite outcome includes death [21]. In general, outcome problems are common in the primary studies considered in a meta-analysis. In this context, a meta-analysis may offer an opportunity to improve markedly on the findings of primary studies, by focusing on, and using more rigorous and complete information about, specific outcomes.

Inappropriate reporting of effect size. As demonstrated in a commonly cited example, it is 'catchier' to say that an intervention or exposure A triples the risk of an event B than to say, pragmatically, that it increases the risk of B from 1:1000 to 3:1000. This bias applies to all clinical research, not just meta-analyses.
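The distinction matters because the same trial data yield both numbers. The toy calculation below (invented event counts, for illustration only) shows how a dramatic relative risk can coexist with a tiny absolute risk difference.

```python
# Invented 2x2 data: events / total in exposed and control groups.
events_exposed, n_exposed = 3, 1000
events_control, n_control = 1, 1000

risk_exposed = events_exposed / n_exposed   # 0.003
risk_control = events_control / n_control   # 0.001

relative_risk = risk_exposed / risk_control       # 3.0 -- the 'catchy' number
risk_difference = risk_exposed - risk_control     # 0.002 -- the pragmatic number
number_needed_to_harm = 1 / risk_difference       # 500

print(f"Relative risk:         {relative_risk:.1f}x")
print(f"Absolute difference:   {risk_difference*1000:.0f} per 1000")
print(f"Number needed to harm: {number_needed_to_harm:.0f}")
```

Reporting both the relative and the absolute effect, as recommended later in this article, prevents the 'three times the risk' framing from overstating a 2-per-1000 difference.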
Abuse

The mass production of meta-analyses can sometimes be driven by commercial incentives, when they are used as marketing tools by industry. This phenomenon is rather rare in our field, in which prescription of chronic or expensive drugs is uncommon, but can be profound in other fields of medicine, antidepressants being a striking example [5]. Between 2007 and 2014, 185 meta-analyses were published on antidepressants. Of these, 29% had at least one author who was employed by the industry, and 79% had some industry link. Not surprisingly, meta-analyses that included industry employees among the authors, or were sponsored by drug companies, were significantly less likely to report caveats for antidepressants [24]. One proposed solution to the wide extent of this problem is to exclude from the authorship of systematic reviews and meta-analyses people who have a stake in the results; this ban would cover not only industry employees but also content experts [25].

Current situation

In an empirical overview of the current state of meta-analysis, it was estimated that about 20% of meta-analyses never get published, whether intentionally or not. Of those that are published, about one in six are meta-analyses of genetic associations, in which results are largely misleading because they are based on abandoned methodology, and about one-third are redundant meta-analyses of other research topics. Of the remainder, about half have serious methodological flaws, and many others are methodologically decent but non-informative. Consequently, good and truly informative meta-analyses represent a small minority of the total, probably less than 5% of those originally written [5].

What can be done to improve the quality of meta-analysis

Meta-analysis should not be abandoned because of its inherent and imposed limitations. On the contrary, it is a valuable tool for critically appraising and summarizing evidence when the evidence is good, and for highlighting its limitations when it is not [5]. Many papers have described what makes a good meta-analysis [4,26-29], and some have proposed tools to assess quality [27-29]. Møller et al. [4] highlight 12 domains that should be considered by authors when designing and conducting a systematic review and meta-analysis. We would like to focus on four of them.

Relevant question

It may seem self-evident, but a meta-analysis should address a clinically and scientifically relevant question, rather than simply combine a group of studies assessing the same treatment just because they are available [30]. Moreover, this question should be amenable to meta-analysis; having a relevant question does not necessarily mean that a meta-analysis is the appropriate tool to answer it. The type of question commonly corresponds to the type of available studies, and certain types of studies (e.g. well-conducted RCTs) are better suited to meta-analysis than others (e.g. small observational studies).

Prospective registration of protocol

Prospective registration, or even publication, of the protocol of a meta-analysis serves two main purposes: (1) it ensures transparency about the methods and outcomes of the study, so as to avoid selective reporting of outcomes or opportunistic post-hoc analyses; and (2) it prevents duplicate meta-analyses. PROSPERO, launched in 2011, is an established registry for the prospective registration of systematic reviews [31] and was developed to address these issues. Authors registering protocols can see whether similar meta-analyses already exist and avoid duplication [32], and there is some evidence that reviews with protocols registered in PROSPERO have a higher AMSTAR (A MeaSurement Tool to Assess systematic Reviews) score [33].

Anticipation and management of heterogeneity

There are three types of heterogeneity, namely clinical diversity, methodological diversity and statistical heterogeneity [30], and their combination culminates in what we could call conceptual heterogeneity. Conceptual heterogeneity can be assessed by taking a step back after the evidence has been gathered and evaluating whether it makes sense to combine it, not just from a statistical but also from a common-sense point of view. While statistical heterogeneity alone is not a sufficient reason to perform or abort a meta-analysis, it should be acknowledged, explored and explained when possible [34]. Subgroup analyses and meta-regression may be performed, the latter when there is a sufficient number of studies for each study-level variable, and a sensitivity analysis restricted to studies at low risk of bias may confirm the robustness of the results [30]. If these analyses show different results, the results should be presented and discussed separately. Ultimately, if segmentation of the evidence leads to inconsistent results from small-scale, fragmented data, the authors should question the rationale for quantitative synthesis.
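Statistical heterogeneity is typically quantified before deciding how to proceed. The sketch below, again on invented numbers, computes Cochran's Q and the derived I² statistic (the percentage of total variability attributable to between-study differences rather than chance), and then pools within two hypothetical subgroups; the subgroup labels echo the progesterone example discussed earlier but are assumptions for illustration, not a real dataset.

```python
# Invented study-level data: effect (log RR), standard error and a
# hypothetical study-level characteristic used for subgroup analysis.
studies = [
    (-0.50, 0.20, "short cervix"),
    (-0.45, 0.18, "short cervix"),
    (-0.05, 0.22, "prior preterm birth"),
    ( 0.10, 0.25, "prior preterm birth"),
]

def fixed_effect(data):
    """Inverse-variance pooled estimate over (effect, se, ...) tuples."""
    w = [1.0 / se**2 for _, se, *_ in data]
    est = sum(wi * y for wi, (y, *_) in zip(w, data)) / sum(w)
    return est, w

def q_and_i2(data):
    """Cochran's Q and the I^2 heterogeneity statistic."""
    est, w = fixed_effect(data)
    q = sum(wi * (y - est)**2 for wi, (y, *_) in zip(w, data))
    df = len(data) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

q, i2 = q_and_i2(studies)
print(f"Overall: Q = {q:.2f}, I^2 = {i2:.0%}")  # high I^2 -> explore, don't just pool

# Exploring heterogeneity: pool within each subgroup separately.
for group in ("short cervix", "prior preterm birth"):
    subset = [s for s in studies if s[2] == group]
    est, _ = fixed_effect(subset)
    q_g, i2_g = q_and_i2(subset)
    print(f"{group}: pooled log(RR) = {est:.2f}, I^2 = {i2_g:.0%}")
```

When the subgroup estimates diverge and the within-subgroup I² collapses, the heterogeneity has been 'explained'; when it persists, the case for reporting a single pooled number weakens, exactly as argued above.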
Assessment of overall quality of evidence

Assessment of the quality of the overall body of evidence synthesized in a meta-analysis is broader than assessment of the risk of bias in each of the included studies [12] and is different from assessment of the quality of the meta-analysis itself. Quality assessment is seen as an unnecessary burden by many authors, as the GRADE scoring system [13,35-40] involves a significant workload and can be punitive if implemented properly and honestly. Indeed, RCTs start off as high-quality evidence and there are five reasons (domains) to downgrade them, while observational studies start off as low-quality evidence and there are five reasons to downgrade them further and three reasons to upgrade them (which happens very rarely) [13].

Having undertaken an otherwise pristine meta-analysis, some authors faced with a low or very low overall GRADE quality score (which would limit their chances of publication in a major journal) may contemplate fiddling with the GRADE components to achieve a more favorable score. Aside from the moral dimension, this attitude is scientifically dangerous, because the overall quality of the evidence reflects our confidence in the results: high-quality evidence means that we are fairly certain that the study findings lie close to the truth, whereas very low-quality evidence means that the results remain highly uncertain. A meticulous and honest assessment of the overall quality of the evidence is a key element of a meta-analysis.

A call for high-quality meta-analyses

Meta-analysis can be a valuable tool in evidence-based medicine. We believe that the recent scepticism towards meta-analysis is mostly a result of its overuse and misuse, rather than of its inherent limitations. As it is important that we do not invalidate this tool by misusing it, we would like to make a call for high-quality research, which, in the case of meta-analysis, should:
- address relevant questions; not all questions are important, and not all important questions can be answered by meta-analysis;
- combine data that can be combined, preferably from RCTs or, even better, individual patient data; the level of evidence in the primary research carries over into the resulting meta-analysis;
- arise from scientific collaborations;
- be registered prospectively, to avoid duplication and increase transparency;
- be analyzed appropriately; the outcomes should reflect what is important for the patients and, in the case of interventions, the methods of analysis should reflect both the absolute and the relative effect;
- be interpreted objectively and truthfully; this is last but far from least, as unjustified certainty can ultimately mislead clinical practice.
Such high-quality meta-analyses shall be given priority in Ultrasound in Obstetrics & Gynecology, so that their findings can be disseminated rapidly and broadly, and clinical practice impacted accordingly.

Supporting Information

Appendix S1: Methods and results for assessment of overlap between publications on the use of progesterone for prevention of preterm birth and publications on the use of low-dose aspirin for prevention of pre-eclampsia (uog22060-sup-0001-AppendixS1.docx, Word document, 90.7 KB).

Please note: the publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author.

REFERENCES

1. Pearson K. Report on certain enteric fever inoculation statistics. Br Med J 1904; 2: 1243-1246.
2. Crowley PA. Antenatal corticosteroid therapy: a meta-analysis of the randomized trials, 1972 to 1994. Am J Obstet Gynecol 1995; 173: 322-335.
3. Crowley P, Chalmers I, Keirse MJ. The effects of corticosteroid administration before preterm delivery: an overview of the evidence from controlled trials. Br J Obstet Gynaecol 1990; 97: 11-25.
4. Møller MH, Ioannidis JPA, Darmon M. Are systematic reviews and meta-analyses still useful research? We are not sure. Intensive Care Med 2018; 44: 518-520.
5. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 2016; 94: 485-514.
6. Ioannidis JPA, Boyack KW, Klavans R. Estimates of the continuously publishing core in the scientific workforce. PLoS One 2014; 9: e101698.
7. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ, GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336: 924-926.
8. Van Wely M. The good, the bad and the ugly: meta-analyses. Hum Reprod 2014; 29: 1622-1626.
9. Petitti DB. Meta-Analysis, Decision Analysis and Cost-Effectiveness Analysis (2nd edn). Oxford University Press: New York, 2000.
10. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-Analysis. John Wiley & Sons: Chichester, UK, 2009.
11. Wachter KW. Disturbed by meta-analysis? Science 1988; 241: 1407-1408.
12. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, Cates CJ, Cheng HY, Corbett MS, Eldridge SM, Emberson JR, Hernán MA, Hopewell S, Hróbjartsson A, Junqueira DR, Jüni P, Kirkham JJ, Lasserson T, Li T, McAleenan A, Reeves BC, Shepperd S, Shrier I, Stewart LA, Tilling K, White IR, Whiting PF, Higgins JPT. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019; 366: 1-8.
13. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, Debeer H, Jaeschke R, Rind D, Meerpohl J, Dahm P, Schünemann HJ. GRADE guidelines: 1. Introduction - GRADE evidence profiles and summary of findings tables. J Clin Epidemiol 2011; 64: 383-394.
14. Horton R. Offline: the gravy train of systematic reviews. Lancet 2019; 394: 1790.
15. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 2009; 4: e5738.
16. Iqbal SA, Wallach JD, Khoury MJ, Schully SD, Ioannidis JPA. Reproducible research practices and transparency across the biomedical literature. PLoS Biol 2016; 14: e1002333.
17. Mallett S. How many Cochrane reviews are needed to cover existing evidence on the effects of healthcare interventions? Evid Based Med 2003; 8: 100-101.
18. Siontis KC, Hernandez-Boussard T, Ioannidis JPA. Overlapping meta-analyses on the same topic: survey of published studies. BMJ 2013; 347: 1-11.
19. Hennessy EA, Johnson BT. Examining overlap of included studies in meta-reviews: guidance for using the corrected covered area index. Res Synth Methods 2020; 11: 134-145.
20. Pieper D, Antoine SL, Mathes T, Neugebauer EAM, Eikermann M. Systematic review finds overlapping reviews were not mentioned in every other overview. J Clin Epidemiol 2014; 67: 368-375.
21. Lau J, Terrin N, Fu R. Expanded guidance on selected quantitative synthesis topics. In Methods Guide for Effectiveness and Comparative Effectiveness Reviews. Agency for Healthcare Research and Quality: Rockville, MD, USA, 2008.
22. Ioannidis JPA. Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials? Philos Ethics Humanit Med 2008; 3: 1-9.
23. Dwan K, Gamble C, Williamson PR, Kirkham JJ, Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS One 2013; 8: e66844.
24. Ebrahim S, Bance S, Athale A, Malachowski C, Ioannidis JPA. Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. J Clin Epidemiol 2016; 70: 155-163.
25. Gøtzsche PC, Ioannidis JPA. Content area experts as authors: helpful or harmful for systematic reviews and meta-analyses? BMJ 2012; 345: e7031.
26. Dekkers OM. Meta-analysis: key features, potentials and misunderstandings. Res Pract Thromb Haemost 2018; 2: 658-663.
27. Higgins JPT, Lane PW, Anagnostelis B, Anzures-Cabrera J, Baker NF, Cappelleri JC, Haughie S, Hollis S, Lewis SC, Moneuse P, Whitehead A. A tool to assess the quality of a meta-analysis. Res Synth Methods 2013; 4: 351-366.
28. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol 2007; 7: 1-7.
29. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, Henry DA. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017; 358: 1-9.
30. Fu R, Gartlehner G, Grant M, Shamliyan T, Sedrakyan A, Wilt TJ, Griffith L, Oremus M, Raina P, Ismaila A, Santaguida P, Lau J, Trikalinos TA. Conducting quantitative synthesis when comparing medical interventions: AHRQ and the Effective Health Care Program. J Clin Epidemiol 2011; 64: 1187-1197.
31. Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, Stewart L. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Syst Rev 2012; 1: 2.
32. Moher D, Booth A, Stewart L. How to reduce unnecessary duplication: use PROSPERO. BJOG 2014; 121: 784-786.
33. Sideri S, Papageorgiou SN, Eliades T. Registration in the international prospective register of systematic reviews (PROSPERO) of systematic review protocols was associated with increased review quality. J Clin Epidemiol 2018; 100: 103-110.
34. Ioannidis JP, Patsopoulos NA, Rothstein HR. Reasons or excuses for avoiding meta-analysis in forest plots. BMJ 2008; 336: 1413-1415.
35. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, Vist GE, Falck-Ytter Y, Meerpohl J, Norris S, Guyatt GH. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol 2011; 64: 401-406.
36. Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, Montori V, Akl EA, Djulbegovic B, Falck-Ytter Y, Norris SL, Williams JW, Atkins D, Meerpohl J, Schünemann HJ. GRADE guidelines: 4. Rating the quality of evidence - study limitations (risk of bias). J Clin Epidemiol 2011; 64: 407-415.
37. Guyatt GH, Oxman AD, Montori V, Vist G, Kunz R, Brozek J, Alonso-Coello P, Djulbegovic B, Atkins D, Falck-Ytter Y, Williams JW, Meerpohl J, Norris SL, Akl EA, Schünemann HJ. GRADE guidelines: 5. Rating the quality of evidence - publication bias. J Clin Epidemiol 2011; 64: 1277-1282.
38. Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, Alonso-Coello P, Glasziou P, Jaeschke R, Akl EA, Norris S, Vist G, Dahm P, Shukla VK, Higgins J, Falck-Ytter Y, Schünemann HJ. GRADE guidelines: 7. Rating the quality of evidence - inconsistency. J Clin Epidemiol 2011; 64: 1294-1302.
39. Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, Alonso-Coello P, Falck-Ytter Y, Jaeschke R, Vist G, Akl EA, Post PN, Norris S, Meerpohl J, Shukla VK, Nasser M, Schünemann HJ. GRADE guidelines: 8. Rating the quality of evidence - indirectness. J Clin Epidemiol 2011; 64: 1303-1310.
40. Guyatt GH, Oxman AD, Sultan S, Glasziou P, Akl EA, Alonso-Coello P, Atkins D, Kunz R, Brozek J, Montori V, Jaeschke R, Rind D, Dahm P, Meerpohl J, Vist G, Berliner E, Norris S, Falck-Ytter Y, Murad MH, Schünemann HJ. GRADE guidelines: 9. Rating up the quality of evidence. J Clin Epidemiol 2011; 64: 1311-1316.
