Journal articles

Reproducibility issues in science, is P value really the only answer?

Abstract: Johnson describes the lack of reproducibility of scientific studies, which he attributes to the weakness of conventional significance thresholds (1). We appreciate the quality of this work and its importance for the interpretation of statistical evidence, and these results should be considered in statistical guidelines. Nevertheless, we would like to point out some important points not thoroughly discussed in this publication.

Not publishing "nonsignificant" results leads to the well-known publication bias, whereby studies with low statistical power are underrepresented. This bias would become more severe, despite recommendations to allow for publication of "negative" results. Lowering the significance level will further increase the type II error, which is clinically as important as the type I error. Focusing only on the type I error may lead to an excessive false nondiscovery rate. In the case of severe diseases, it is not uncommon to fix the significance level at 0.1 in early-phase trials (2), to avoid excluding an effective treatment. Johnson argues that this may be corrected by increasing the sample size. However, increasing the size of clinical trials will reduce their feasibility and increase their duration. Beyond these issues, including more patients means exposing more patients to an experimental treatment and may challenge the equipoise concept.

The issue of fixing a threshold defining significance goes back to the Fisher versus Neyman–Pearson controversy. Estimating a P value is needed to quantify the strength of evidence; fixing a threshold is needed to make a decision while controlling the risks of type I and type II error. Regarding the issue addressed by Johnson, it would be interesting to assess whether a priori specification of the threshold is required, or whether research results could be compared using the P value and the magnitude of the test statistic.

The issue of the significance level is only the tip of the iceberg. Design issues should not be overlooked when discussing lack of reproducibility. Selection bias leads to extrapolation of results to a population different from the target population (3). Furthermore, the "poor reporting" practice highlighted by Altman et al. (4) and the lack of compliance with reporting recommendations (e.g., Consolidated Standards of Reporting Trials) hinder a proper assessment of the quality of a study and can hide selection bias or misuse of statistical tests; the latter leads to nonreproducibility of the reported research. As an extreme example, monthly American air passengers and Australian electricity production in the late 1950s are highly correlated (Pearson's correlation = 0.88, P = 8.8 × 10⁻¹³) without any meaning. The causality criteria defined by Hill (5) highlight other important considerations in the interpretation of results. Reliance on P values remains surprisingly widespread, but good decision making depends on the magnitude of effects, the plausibility of scientific …
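The trade-off between type I and type II error described above can be made concrete with a quick power calculation. The following Python sketch is not part of the letter: the effect size, the per-arm sample size, and the use of statsmodels' t-test power routine are our own illustrative assumptions. It shows that tightening the significance level from 0.05 to 0.005 at a fixed sample size inflates the type II error, and how many more patients per arm would be needed to restore 80% power at the stricter threshold.

```python
# A minimal sketch (not from the letter): for a fixed sample size,
# tightening alpha from 0.05 to 0.005 lowers statistical power,
# i.e., raises the type II error rate.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3   # assumed standardized difference (Cohen's d)
n_per_group = 100   # assumed number of patients per arm

for alpha in (0.05, 0.005):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0)
    print(f"alpha={alpha}: power={power:.2f}, type II error={1 - power:.2f}")

# Recovering 80% power at the stricter threshold requires more patients
# per arm, which is the feasibility/equipoise concern raised in the letter.
n_needed = analysis.solve_power(effect_size=effect_size, power=0.80,
                                alpha=0.005, ratio=1.0)
print(f"n per arm for 80% power at alpha=0.005: {n_needed:.0f}")
```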
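The spurious-correlation point can likewise be reproduced with synthetic data. The sketch below uses two artificial upward-trending monthly series as stand-ins, not the actual airline and electricity series cited in the letter: the shared trend alone produces a large Pearson r with a minuscule P value, and the apparent association largely disappears once the trend is removed by differencing.

```python
# A minimal sketch (synthetic data, not the actual series from the letter):
# two independent trending series yield a large Pearson correlation with a
# tiny P value, exactly the kind of meaningless "significance" described above.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
months = np.arange(120)  # ten years of monthly observations

# Two unrelated series that both happen to trend upward over time.
passengers = 100 + 2.0 * months + rng.normal(0, 15, size=months.size)
electricity = 50 + 1.5 * months + rng.normal(0, 10, size=months.size)

r, p = pearsonr(passengers, electricity)
print(f"Pearson r = {r:.2f}, P = {p:.1e}")   # large r, minuscule P

# Correlating month-to-month changes removes the shared trend,
# and the spurious association collapses.
r_diff, p_diff = pearsonr(np.diff(passengers), np.diff(electricity))
print(f"After differencing: r = {r_diff:.2f}, P = {p_diff:.2f}")
```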
Complete list of metadata

https://hal-amu.archives-ouvertes.fr/hal-01307492
Contributor: Jean Gaudart
Submitted on: Tuesday, April 26, 2016 - 5:52:41 PM
Last modification on: Tuesday, May 14, 2019 - 6:50:14 PM
Long-term archiving on: Wednesday, July 27, 2016 - 3:00:14 PM

File

pnas.201323051.pdf
Publication funded by an institution

Licence


Distributed under a Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International License


Citation

Jean Gaudart, Laetitia Huiart, P. J. Milligan, Rodolphe Thiebaut, Roch Giorgi. Reproducibility issues in science, is P value really the only answer? Proceedings of the National Academy of Sciences of the United States of America, National Academy of Sciences, 2014, 111 (19), p. e1934. ⟨10.1073/pnas.1323051111⟩. ⟨hal-01307492⟩

Metrics

Record views: 1346
File downloads: 355