Article | Open Access | Peer Reviewed

Evolution in statistics: P values, statistical significance, kayaks, and walking trees

Advances in Physiology Education (American Physiological Society); 2020; Volume 44, Issue 2. Language: English

DOI: 10.1152/advan.00054.2020

ISSN: 1522-1229

Author: Douglas Curran-Everett

Topic(s): Statistical Methods and Inference

Full Text

Editorial

Douglas Curran-Everett
Division of Biostatistics and Bioinformatics, National Jewish Health, Denver, Colorado; Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Denver, Denver, Colorado

Published online 15 May 2020. https://doi.org/10.1152/advan.00054.2020

The scientific literature is littered with the adjective significant and the descriptive phrase statistically significant. Established members of the statistical community now recommend that a scientific paper simply report an actual P value divorced from any word or phrase that reflects statistical significance (Hurlbert SH, Levine RA, Utts J. Am Stat 73, Suppl 1: 352–357, 2019). In this editorial, I illustrate why this deceptively simple change is important.

A Brief History of Hypothesis Tests, P Values, and Significance

Early hypothesis tests, from the Trial of the Pyx in 1279 through the assessment of a discrepant celestial measurement in the 1700s, mandated a binary outcome: the measurement of weight or position either was or was not within some allowable deviation (see Ref. 6).

Between the 1800s and the early 1900s, the focus of a hypothesis test shifted from whether the measurement of a coin or a star was within an allowable deviation to whether some event in mathematics or science could be attributed to chance alone, but the outcome of that hypothesis test remained a binary one (see Ref. 6). If some event—if some difference—was unlikely to have resulted from chance alone, then Edgeworth described that difference as significant, very significant, or as significant and not accidental (15).
Thirty-five years later, Boring (2) warned about interpreting a significant difference in the absence of scientific context.

In his pivotal Statistical Methods for Research Workers (16), Sir Ronald Fisher also used significant when he discussed the magnitude of a deviation he regarded as beyond chance, and he defined 0.05 as the benchmark for when he considered some deviation as significant:

    The value for which P = ·05, or 1 in 20, is 1·96 or nearly 2; it is convenient to take this point as a limit in judging whether a deviation is to be considered significant or not. Deviations exceeding twice the standard deviation are thus formally regarded as significant.

In the intervening 100 yr, Fisher's significance level of 0.05 assumed mythic proportions despite subsequent but less visible elaborations by Fisher himself:

    If one in twenty does not seem high enough odds, we may, if we prefer it, draw the line at one in fifty (the 2 per cent. point), or one in a hundred (the 1 per cent. point). Personally, the writer prefers to set a low standard of significance at the 5 per cent. point, and ignore entirely all results which fail to reach this level. A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance. [Ref. 17 (1926)]

    The attempts that have been made to explain the cogency of tests of significance in scientific research . . . seem to miss the essential nature of such tests. A [person] who "rejects" a hypothesis provisionally, as a matter of habitual practice, when the significance is at the 1% level or higher, will certainly be mistaken in not more than 1% of such decisions. For when the hypothesis is correct he will be mistaken in just 1% of these cases, and when it is incorrect he will never be mistaken in rejection. . . .
    However, the calculation is absurdly academic, for in fact no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas. Further, the calculation is based solely on a hypothesis, which, in the light of the evidence, is often not believed to be true at all, so that the actual probability of erroneous decision, supposing such a phrase to have any meaning, may be much less than the frequency specifying the level of significance. [Ref. 18 (1956)]

Recent suggestions that the critical significance level α—the benchmark for how statistically unusual a result must be before we reject the corresponding null hypothesis—be set at 0.005 or 0.001 (23, 25) minimize the chance that we get a false positive and improve reproducibility (see Ref. 10), but a lower, more stringent significance level α, by itself, fails to address the binary nature of the benchmark (1).

Suppose we define beforehand α = 0.05 and then obtain, using actual data, P = 0.051. Is our scientific conclusion going to differ from our conclusion had P = 0.049? I hope not. The same logic applies had we defined α = 0.001: does 0.0011 truly differ from 0.0009? No.

One strategy that circumvents this problem with a binary benchmark (see footnote 1) is to report the actual P value rather than simply P < 0.05 or P > 0.05 (see Refs. 12 and 14). This strategy, however, has met with limited success in journals published by the American Physiological Society (APS): guidelines for reporting statistics (12) had virtually no impact on the occurrence of actual P values (11, 13); see Table 1.

Table 1.
American Physiological Society journal manuscripts in 2003 and 2006: reporting of statistics

                                    n               Actual P values, %
    Journal                     2003    2006          2003    2006
    Am J Physiol
      Cell Physiol                30     322            13      13
      Endocrinol Metab            28     302            39      30
      Gastrointest Liver Physiol  28     272            14      17
      Heart Circ Physiol          62     627            19      20
      Lung Cell Mol Physiol       26     261            19      18
      Regul Integr Comp Physiol   29     384            41      27
      Renal Physiol               25     289             4      17*
    J Appl Physiol                57     519            26      34
    J Neurophysiol                61     699            30      38

For each journal, values are n, the no. of manuscripts reviewed, and the percentage of manuscripts that reported actual P values (for example, P = 0.02 rather than P < 0.05, or P = 0.11 rather than P > 0.05). From August 2003 through July 2004, these journals published a total of roughly 3,500 original articles; the number of articles reviewed represents a 10% sample (systematic random sampling, fixed start) of the original articles published by each journal. From August 2005 through July 2006, these journals published a total of 3,675 original articles; the number of articles reviewed represents a complete survey of the original articles published by each journal. *For this pair, the percentage in 2003 was less than the percentage in 2006 (P = 0.06, exact binomial test, 1-tailed). [Adapted from Table 1 in Ref. 13.]

Although the P value associated with the statistical test of some null hypothesis is useful—it helps guard against an unwarranted conclusion, or it helps argue for a real experimental effect (3, 24)—the only question it can answer is a trivial one: is there anything other than random variation going on here? Statisticians have long warned against a sole focus on hypothesis tests and their resultant P values (see Refs. 6 and 14).

The Vagaries of P Values

As a thought experiment I have posed before (see Refs. 9 and 10), suppose we want to learn if some intervention affects the biological thing we care about. If we use two groups—a control group and an experimental group—we might ask if our samples came from the same or different populations.
Therefore, the null and alternative hypotheses, H0 and H1, are:

    H0: The samples come from the same population.
    H1: The samples come from different populations.

If we want to know whether the populations have the same mean, we can write these as

    H0: Δμ = 0
    H1: Δμ ≠ 0,

where Δμ, the difference in population means, is the difference between the means of the experimental and control populations.

We know from previous simulations in which the null hypothesis H0: Δμ = 0 is true (see Refs. 9 and 10) that the observed P values that result from the test of this null hypothesis are distributed over a wide range of values (Fig. 1): only 100α% of them will be smaller than the critical significance level α.

Fig. 1. The distribution of observed P values from 100,000 replications of a simulation in which the null hypothesis, H0: Δμ = 0, is true. We drew at random two samples, each with n observations, from a standard normal distribution (top left; see Refs. 4 and 6), did a two-sample t test (using a critical significance level α = 0.05), and then repeated this process to generate a total of 100,000 replications. In 5% of the replications (gray), P < 0.05: that is, we reject a true null hypothesis (see Ref. 6). The proportion of false positives depends solely on the critical significance level α. [From Ref. 9, Fig. 1.]

What may be less obvious is that the observed P values from the simulated test of a false null hypothesis (Fig. 2) are also distributed over a wide range of values (9, 10, 21); see Fig. 3 and Table 2. In this situation, it is the power of the statistical test (see Ref. 7) that determines the proportion of observed P values that will be smaller—and larger—than the critical significance level α.

Fig. 2. The populations. Population 0 is a standard normal distribution with mean μ0 = 0 and standard deviation σ0 = 1. Population 1 is a normal distribution with mean μ1 = 1 and standard deviation σ1 = 1.
Therefore, the true difference in population means, Δμ, is μ1 − μ0 = 1, and the effect size is Δμ/σ = 1.

Fig. 3. The distributions of observed P values from 100,000 replications of simulations in which the null hypothesis, H0: Δμ = 0, is false. We drew at random two samples, each with 10 (top) or 23 (bottom) observations, from the normal distributions in Fig. 2, did a two-sample t test (using a critical significance level α = 0.05), and then repeated this process to generate a total of 100,000 replications. The proportion of replications in which P was less than 0.0001, 0.001, 0.01, or 0.05 is listed in the upper portion of each graph.

Table 2. Percentiles of observed P values when the null hypothesis is false

                                        Percentile
     n   Power      2.5       25       50      56      75     91    97.5
    10    0.56    0.0002    0.007     0.04    0.05    0.13   0.37   0.72
    23    0.91   <0.0001    0.0001    0.001   0.002   0.01   0.05   0.16

Values are n, the no. of observations drawn from each population in Fig. 2; theoretical power of the 2-sample t test used to evaluate the null hypothesis H0: Δμ = 0; and percentiles of the distributions of observed P values depicted in Fig. 3. When power is 0.91, 9% of the observed P values are greater than α = 0.05.

The dichotomy between statistical significance and scientific importance can be illustrated by taking progressively larger numbers of observations from each population in Fig. 2 (Table 3): statistical significance increases (P decreases), but the scientific importance as estimated by Δȳ, the difference between sample means, remains constant.

Table 3. Limitations of statistical significance

     n   Δȳ   SE{Δȳ}   df     t       P         95% CI           CI Width
     2    1   0.531     2   1.883   0.20      −1.28 to +3.28       4.56
     4    1   0.294     6   3.399   0.01       0.28 to 1.72        1.44
     8    1   0.528    14   1.894   0.08      −0.13 to +2.13       2.26
    10    1   0.364    18   2.745   0.01       0.23 to 1.77        1.54
    15    1   0.384    28   2.604   0.01       0.21 to 1.79        1.58
    20    1   0.396    38   2.523   0.02       0.20 to 1.80        1.60
    23    1   0.282    44   3.550   0.0009     0.43 to 1.57        1.14
    25    1   0.239    48   4.180   0.0001     0.52 to 1.48        0.96
    32    1   0.220    62   4.543   0.00003    0.56 to 1.44        0.88

Values are n, the no. of observations drawn from each population in Fig. 2; Δȳ, the difference between sample means; SE{Δȳ}, the standard error of the difference between sample means; df, degrees of freedom; t, the test statistic used to evaluate the null hypothesis, H0: Δμ = 0; P, the probability associated with t and the corresponding df; 95% CI, 95% confidence interval for Δμ, the difference between population means; CI Width, width of the 95% CI for Δμ. The difference Δȳ and the corresponding 95% CI for Δμ reflect the magnitude and uncertainty of the observed results. The test statistic t and its associated P value reflect statistical significance. As the number of observations drawn from each population increases, Δȳ, the estimated difference between population means, remains constant by virtue of the sampling process, but SE{Δȳ} decreases. As a consequence, the statistical significance increases (irregularly, because of random sampling), the scientific impact as estimated by Δȳ remains constant, and the precision of the scientific impact as estimated by the width of the confidence interval increases. See Refs. 8 and 14 for additional detail about the 2-sample t test used in this simulation. [Adapted from Ref. 14; the sampling process is depicted in Fig. 5 of Ref. 14.]

Evolving Beyond Statistical Significance

In 2016 the American Statistical Association (ASA) issued a position statement that discussed P values and the notion of statistical significance (27). In October 2017 the ASA held a 2-day symposium on statistical inference that resulted in a 43-paper volume of The American Statistician (26). One of the papers (Ref. 22) advocated that a scientific paper simply report an actual P value divorced from phrases that reflect statistical significance. In their editorial preface (26), the Editors endorsed this recommendation:

    A label of statistical significance adds nothing to what is already conveyed by the value of p . . . .

This deceptively simple change affords benefits beyond what you might imagine. Dropping the word significant or the phrase statistically significant prevents a reflexive association of scientific importance with a mere statistical result. This is helpful for two reasons. First, it is quite possible to have a statistically convincing change that is of little or no scientific relevance (see Ref. 5). And second, a binary distinction between not significant and statistically significant can be associated with a trivial difference in the magnitude of the underlying estimate of some effect (20).

To appreciate how straightforward this change is to implement, consider this portion of the abstract from a March 2020 APS Select paper (28):

    Male, but not female, collecting duct Bmal1 knockout (CDBmal1KO) mice had significantly lower 24-h mean arterial pressure (MAP) than flox controls (105 ± 2 vs. 112 ± 3 mmHg for male mice and 106 ± 1 vs. 108 ± 1 mmHg for female mice, by telemetry). After 6 days on a high-salt (4% NaCl) diet, MAP remained significantly lower in male CDBmal1KO mice than in male flox control mice (107 ± 2 vs. 113 ± 1 mmHg), with no significant differences between genotypes in female mice (108 ± 2 vs. 109 ± 1 mmHg). . . . However, MAP remained lower in male CDBmal1KO mice than in male flox control mice (124 ± 2 vs. 130 ± 2 mmHg).

This is how that portion of the abstract could have been written (see footnote 2) to align with the recommendation of Refs. 22 and 26:

    Male, but not female, collecting duct Bmal1 knockout (CDBmal1KO) mice had lower 24-h mean arterial pressure (MAP) than flox controls [105 vs. 112 mmHg for male mice (P = 0.00) and 106 vs. 108 mmHg for female mice (P = 0.00), by telemetry]. After 6 days on a high-salt (4% NaCl) diet, MAP remained lower in male CDBmal1KO mice than in male flox control mice [107 vs. 113 mmHg (P = 0.00)], with no differences between genotypes in female mice [108 vs. 109 mmHg (P = 0.00)]. . . .
    However, MAP remained lower in male CDBmal1KO mice than in male flox control mice [124 vs. 130 mmHg (P = 0.00)].

Reference 22 gives more examples.

In April 2019, Hurlbert, Levine, and Utts, the authors of Ref. 22, contacted the Editors-in-Chief of the journals published by the APS. In their e-mail they encouraged the Editors to purge the phrase statistically significant from the APS journals' future papers, and they argued that APS could spearhead this reform. The APS Publications Committee tabled discussion of this recommendation.

In 1991 Ralph Fletcher wrote Walking Trees, a memoir of his experiences helping teachers in New York City schools learn how to teach writing (19). The title stems from a story Heather, a first-grader, wrote about a family trip to Florida:

    See, me and my mommy and daddy went to Florida, and we saw the walking trees down there. They walk with their roots. They take one step in a hundred years. [Significant pause.] We didn't see them walk while we were there.

For Fletcher, Heather's trees became a metaphor for the terribly glacial rate of change in education and the Herculean effort required to make even the smallest progress. In my 2017 editorial (11), I wrote that trying to change the reporting practices of statistics was like trying to change the direction of an ocean liner with a kayak. Whether the metaphor is kayaks or walking trees, so it is with the use of statistics within science.

In my 2017 editorial, I also announced that I had asked the Associate Editors and Editorial Board of Advances to actively promote two of the 2004 guidelines for reporting statistics (12). I wrote that I understood it was difficult to change entrenched practices and that I understood change was slow, but that did not mean we should not try.
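As an aside, the simulations behind Figs. 1 and 3 are easy to reproduce. The sketch below is my own illustration, not the code used for the published figures; it assumes NumPy and SciPy are available and uses fewer replications than the 100,000 in the figures. It draws two samples, runs a two-sample t test on each pair, and tallies how often P falls below α = 0.05: about 5% of the time under a true null hypothesis, and about 91% of the time (the theoretical power in Table 2) when n = 23 per group and the effect size is 1.

```python
# A sketch of the simulations behind Figs. 1 and 3 (not the original code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(20200515)

def simulate_p_values(delta_mu, n, reps=20_000):
    """Draw reps pairs of samples of size n (SD = 1, means 0 and delta_mu),
    run a two-sample t test on each pair, and return the observed P values."""
    group0 = rng.normal(0.0, 1.0, size=(reps, n))
    group1 = rng.normal(delta_mu, 1.0, size=(reps, n))
    return stats.ttest_ind(group0, group1, axis=1).pvalue

# True null hypothesis (Fig. 1): P values are uniform on [0, 1],
# so only ~5% fall below alpha = 0.05, whatever the sample size.
p_null = simulate_p_values(delta_mu=0.0, n=10)
print((p_null < 0.05).mean())   # ~0.05

# False null hypothesis, effect size 1, n = 23 (Fig. 3, bottom): power is ~0.91,
# yet ~9% of the observed P values still land above 0.05.
p_false = simulate_p_values(delta_mu=1.0, n=23)
print((p_false < 0.05).mean())  # ~0.91
```

The same arrays also reproduce the spread shown in Table 2, for example via np.percentile(p_false, [2.5, 25, 50, 56, 75, 91, 97.5]).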
When the mainstream statistical community—virtually en masse—recommends a simple, specific course of action—report an actual P value without phrases that reflect statistical significance—the scientific community would do well to heed that recommendation.

DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the author.

AUTHOR CONTRIBUTIONS

D.C.-E. analyzed data; prepared figures; drafted manuscript; edited and revised manuscript; approved final version of manuscript.

ACKNOWLEDGMENTS

I thank Ronald Wasserstein (Executive Director, American Statistical Association) for his time and for his helpful comments and suggestions.

REFERENCES

1. Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature 567: 305–307, 2019. doi:10.1038/d41586-019-00857-9.
2. Boring EG. Mathematical vs. scientific significance. Psychol Bull 16: 335–338, 1919. doi:10.1037/h0074554.
3. Cox DR. Statistical significance tests. Br J Clin Pharmacol 14: 325–331, 1982. doi:10.1111/j.1365-2125.1982.tb01987.x.
4. Curran-Everett D. Explorations in statistics: standard deviations and standard errors. Adv Physiol Educ 32: 203–208, 2008. doi:10.1152/advan.90123.2008.
5. Curran-Everett D. Explorations in statistics: confidence intervals. Adv Physiol Educ 33: 87–90, 2009. doi:10.1152/advan.00006.2009.
6. Curran-Everett D. Explorations in statistics: hypothesis tests and P values. Adv Physiol Educ 33: 81–86, 2009. doi:10.1152/advan.90218.2008.
7. Curran-Everett D. Explorations in statistics: power. Adv Physiol Educ 34: 41–43, 2010. doi:10.1152/advan.00001.2010.
8. Curran-Everett D. Explorations in statistics: permutation methods. Adv Physiol Educ 36: 181–187, 2012. doi:10.1152/advan.00072.2012.
9. Curran-Everett D. Explorations in statistics: statistical facets of reproducibility. Adv Physiol Educ 40: 248–252, 2016. doi:10.1152/advan.00042.2016.
10. Curran-Everett D. CORP: Minimizing the chances of false positives and false negatives. J Appl Physiol (1985) 122: 91–95, 2017. doi:10.1152/japplphysiol.00937.2016.
11. Curran-Everett D. Small steps to help improve the caliber of the reporting of statistics. Adv Physiol Educ 41: 321–323, 2017. doi:10.1152/advan.00049.2017.
12. Curran-Everett D, Benos DJ. Guidelines for reporting statistics in journals published by the American Physiological Society. Adv Physiol Educ 28: 85–87, 2004. doi:10.1152/advan.00019.2004.
13. Curran-Everett D, Benos DJ. Guidelines for reporting statistics in journals published by the American Physiological Society: the sequel. Adv Physiol Educ 31: 295–298, 2007. doi:10.1152/advan.00022.2007.
14. Curran-Everett D, Taylor S, Kafadar K. Fundamental concepts in statistics: elucidation and illustration. J Appl Physiol (1985) 85: 775–786, 1998. doi:10.1152/jappl.1998.85.3.775.
15. Edgeworth FY. Methods of statistics. J Stat Soc London Jubilee: 181–217, 1885.
16. Fisher RA. Statistical Methods for Research Workers. London: Oliver and Boyd, 1925.
17. Fisher RA. The arrangement of field experiments. J Minist Agric GB 33: 503–513, 1926.
18. Fisher RA. Statistical Methods and Scientific Inference. London: Oliver and Boyd/Longman Group, 1956.
19. Fletcher R. Walking Trees. Portsmouth, NH: Heinemann, 1991, p. 202.
20. Gelman A, Stern H. The difference between "significant" and "not significant" is not itself statistically significant. Am Stat 60: 328–331, 2006. doi:10.1198/000313006X152649.
21. Halsey LG, Curran-Everett D, Vowler SL, Drummond GB. The fickle P value generates irreproducible results. Nat Methods 12: 179–185, 2015. doi:10.1038/nmeth.3288.
22. Hurlbert SH, Levine RA, Utts J. Coup de grâce for a tough old bull: "statistically significant" expires. Am Stat 73, Suppl 1: 352–357, 2019. doi:10.1080/00031305.2018.1543616.
23. Johnson VE. Revised standards for statistical evidence. Proc Natl Acad Sci USA 110: 19313–19317, 2013. doi:10.1073/pnas.1313476110.
24. Snedecor GW, Cochran WG. Statistical Methods (6th ed.). Ames, IA: Iowa State Univ. Press, 1967.
25. Sterne JAC, Davey Smith G. Sifting the evidence—what's wrong with significance tests? BMJ 322: 226–231, 2001. doi:10.1136/bmj.322.7280.226.
26. Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond "p < 0.05". Am Stat 73, Suppl 1: 1–19, 2019. doi:10.1080/00031305.2019.1583913.
27. Wasserstein RL, Lazar NA. The ASA's statement on p-values: context, process, and purpose. Am Stat 70: 129–133, 2016. doi:10.1080/00031305.2016.1154108.
28. Zhang D, Jin C, Obi IE, Rhoads MK, Soliman RH, Sedaka RS, Allan JM, Tao B, Speed JS, Pollock JS, Pollock DM. Loss of circadian gene Bmal1 in the collecting duct lowers blood pressure in male, but not female, mice. Am J Physiol Renal Physiol 318: F710–F719, 2020. doi:10.1152/ajprenal.00364.2019.

FOOTNOTES

1. In contrast to the limitations of a binary benchmark in statistics, a binary decision in medicine is typically essential: a surgeon will either operate or not, or an oncologist will either prescribe chemotherapy or not. There may be uncertainty associated with the decision—will there be a good result?—but a clinical decision must be made.

2. In this revision I have deleted the standard error associated with each mean pressure (11, 12). Because Ref. 28 did not report actual P values, I have used P = 0.00 as a placeholder.

AUTHOR NOTES

Address for reprint requests and other correspondence: D. Curran-Everett, Div. of Biostatistics and Bioinformatics, M222 National Jewish Health, 1400 Jackson St., Denver, CO 80206-2761 (e-mail: [email protected]org).

Received 19 March 2020; accepted 7 April 2020; published online 15 May 2020; published in print 1 June 2020.

Advances in Physiology Education, Volume 44, Issue 2, June 2020, Pages 221-224. Copyright © 2020 the American Physiological Society. https://doi.org/10.1152/advan.00054.2020. PubMed 32412384.
