P-value
In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience.
In 2016, the American Statistical Association (ASA) made a formal statement that "p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone" and that "a p-value, or statistical significance, does not measure the size of an effect or the importance of a result", and "does not provide a good measure of evidence regarding a model or hypothesis" without "context or other evidence". That said, a 2019 ASA task force issued a statement on statistical significance and replicability, concluding: "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data".
Basic concepts
In statistics, every conjecture concerning the unknown probability distribution of a collection of random variables representing the observed data in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not to investigate other specific hypotheses, then such a test is called a null hypothesis test. As our statistical hypothesis will, by definition, state some property of the distribution, the null hypothesis is the default hypothesis under which that property does not exist. The null hypothesis is typically that some parameter in the populations of interest is zero. Our hypothesis might specify the probability distribution of the data precisely, or it might only specify that it belongs to some class of distributions. Often, we reduce the data to a single numerical statistic, e.g., T, whose marginal probability distribution is closely connected to a main question of interest in the study.
The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic. The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.
Loosely speaking, rejection of the null hypothesis implies that there is sufficient evidence against it.
As a particular example, if a null hypothesis states that a certain summary statistic T follows the standard normal distribution N(0, 1), then rejection of this null hypothesis could mean that (i) the mean of T is not 0, or (ii) the variance of T is not 1, or (iii) T is not normally distributed. Different tests of the same null hypothesis would be more or less sensitive to different alternatives. However, even if we do manage to reject the null hypothesis for all three alternatives, and even if we know that the distribution is normal and the variance is 1, the null hypothesis test does not tell us which non-zero values of the mean are now most plausible. The more independent observations from the same probability distribution one has, the more accurate the test will be, and the higher the precision with which one will be able to determine the mean value and show that it is not equal to zero; but this will also increase the importance of evaluating the real-world or scientific relevance of this deviation.
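The trade-off described in the last sentence can be made concrete with a short sketch. The snippet below (illustrative; the function names and the small effect size 0.2 are our own choices, not from the text) computes the power of a one-sided z-test analytically, showing that a larger sample will detect even a small, possibly unimportant, deviation of the mean from zero:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power(mu, n, crit=1.6449):
    """Power of a one-sided z-test of H0: mean = 0 at level 0.05
    (critical value ~1.6449), given n observations from N(mu, 1)."""
    return 1.0 - phi(crit - mu * sqrt(n))

# For a small true effect mu = 0.2, power rises steeply with sample size:
print(power(0.2, 25), power(0.2, 100), power(0.2, 400))
```

With enough data the test rejects almost surely, so the statistical question ("is the mean exactly zero?") must be separated from the scientific one ("is a mean of 0.2 practically relevant?").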
Definition and interpretation
Definition
The p-value is the probability under the null hypothesis of obtaining a real-valued test statistic at least as extreme as the one obtained. Consider an observed test statistic t from an unknown distribution T. Then the p-value p is the prior probability of observing a test-statistic value at least as "extreme" as t if the null hypothesis H0 were true. That is:
- p = Pr(T ≥ t | H0) for a one-sided right-tail test-statistic distribution.
- p = Pr(T ≤ t | H0) for a one-sided left-tail test-statistic distribution.
- p = 2 min{Pr(T ≥ t | H0), Pr(T ≤ t | H0)} for a two-sided test-statistic distribution. If the distribution of T is symmetric about zero, then p = Pr(|T| ≥ |t| | H0).
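As a minimal illustration of these definitions, assuming the test statistic is standard normal under H0 (the function names here are illustrative), the tail probabilities can be computed directly from the normal CDF:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_value(t, tail="two-sided"):
    """p-value for an observed statistic t, assumed N(0,1) under H0."""
    if tail == "right":        # Pr(T >= t | H0)
        return 1.0 - phi(t)
    if tail == "left":         # Pr(T <= t | H0)
        return phi(t)
    # Two-sided, symmetric about zero: Pr(|T| >= |t| | H0)
    return 2.0 * (1.0 - phi(abs(t)))

# Example: the familiar z = 1.96 gives a two-sided p-value of about 0.05.
print(p_value(1.96))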
Interpretations
Different p-values based on independent sets of data can be combined, for instance using Fisher's combined probability test.
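A hedged sketch of Fisher's combined probability test mentioned above: under the joint null hypothesis of k independent tests, the statistic X = −2 Σ ln pᵢ follows a chi-squared distribution with 2k degrees of freedom, so the combined p-value is the chi-squared tail probability (which has a closed form for even degrees of freedom; the function name is our own):

```python
from math import exp, log

def fisher_combined_p(p_values):
    """Fisher's method: X = -2 * sum(ln p_i) is chi-squared with
    2k degrees of freedom under the joint null of k independent tests."""
    k = len(p_values)
    x = -2.0 * sum(log(p) for p in p_values)
    # Chi-squared survival function for even df = 2k:
    # Pr(X >= x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return exp(-x / 2.0) * total

# Two independent tests at p = 0.05 each combine to roughly p = 0.0175.
print(fisher_combined_p([0.05, 0.05]))
```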
Distribution
The p-value is a function of the chosen test statistic T and is therefore a random variable. If the null hypothesis fixes the probability distribution of T precisely, and if that distribution is continuous, then when the null hypothesis is true the p-value is uniformly distributed between 0 and 1. Regardless of the truth of the null hypothesis, the p-value is not fixed; if the same test is repeated independently with fresh data, one will typically obtain a different p-value in each iteration. Usually only a single p-value relating to a hypothesis is observed, so the p-value is interpreted by a significance test, and no effort is made to estimate the distribution it was drawn from. When a collection of p-values is available, the distribution of significant p-values is sometimes called a p-curve.
A p-curve can be used to assess the reliability of scientific literature, such as by detecting publication bias or p-hacking.
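The uniformity claim above is easy to check by simulation. This sketch (illustrative; the sample sizes and the seed are arbitrary choices) repeatedly runs a two-sided z-test on data generated under the null hypothesis and confirms that roughly 5% of the resulting p-values fall below 0.05:

```python
import random
from math import erf, sqrt
from statistics import mean

random.seed(0)

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def one_test(n=30):
    """One two-sided z-test of H0: mean = 0 on n draws from N(0, 1)."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = mean(xs) * sqrt(n)  # known variance 1, so Z ~ N(0,1) under H0
    return 2.0 * (1.0 - phi(abs(z)))

# Under a true, continuous null, p-values are Uniform(0, 1): about 5%
# fall below 0.05 and the mean is near 0.5.
ps = [one_test() for _ in range(2000)]
frac_sig = sum(p < 0.05 for p in ps) / len(ps)
print(frac_sig, mean(ps))
```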
Distribution for composite hypothesis
In parametric hypothesis testing problems, a simple or point hypothesis refers to a hypothesis where the parameter's value is assumed to be a single number. In contrast, in a composite hypothesis the parameter's value is given by a set of numbers. When the null hypothesis is composite, then when it is true the probability of obtaining a p-value less than or equal to any number between 0 and 1 is still less than or equal to that number. In other words, it remains the case that very small p-values are relatively unlikely if the null hypothesis is true, and that a significance test at level α is obtained by rejecting the null hypothesis if the p-value is less than or equal to α. For example, when testing the null hypothesis that a distribution is normal with a mean less than or equal to zero against the alternative that the mean is greater than zero, the null hypothesis does not specify the exact probability distribution of the appropriate test statistic. In this example, that would be the Z-statistic belonging to the one-sided one-sample Z-test. For each possible value of the theoretical mean, the Z-test statistic has a different probability distribution. In these circumstances, the p-value is defined by taking the least favorable null-hypothesis case, which is typically on the border between null and alternative.
This definition ensures the complementarity of p-values and alpha levels: a significance level of α means one only rejects the null hypothesis if the p-value is less than or equal to α, and the hypothesis test will indeed have a maximum type-1 error rate of α.
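A minimal sketch of the composite-null example above, for the one-sided one-sample Z-test with known variance 1 (the function name is our own): the p-value is evaluated at the least favorable boundary case μ = 0, since for any μ < 0 in the null set the tail probability would only be smaller.

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_value_composite(sample_mean, n, sigma=1.0):
    """One-sided Z-test of H0: mu <= 0 vs H1: mu > 0.
    The p-value is computed at the least favorable null case mu = 0
    (the border between null and alternative)."""
    z = sample_mean * sqrt(n) / sigma
    return 1.0 - phi(z)  # Pr(Z >= z) under mu = 0

# A sample mean of 0.3 over 30 observations sits right near p = 0.05.
print(p_value_composite(0.3, 30))
```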
Usage
The p-value is widely used in statistical hypothesis testing, specifically in null hypothesis significance testing. In this method, before conducting the study, one first chooses a model and the alpha level α. After analyzing the data, if the p-value is less than α, that is taken to mean that the observed data is sufficiently inconsistent with the null hypothesis for the null hypothesis to be rejected. However, that does not prove that the null hypothesis is false. The p-value does not, in itself, establish probabilities of hypotheses. Rather, it is a tool for deciding whether to reject the null hypothesis.
Misuse
According to the ASA, there is widespread agreement that p-values are often misused and misinterpreted. One practice that has been particularly criticized is accepting the alternative hypothesis for any p-value nominally less than 0.05 without other supporting evidence. Although p-values are helpful in assessing how incompatible the data are with a specified statistical model, contextual factors must also be considered, such as "the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis". Another concern is that the p-value is often misunderstood as the probability that the null hypothesis is true. p-values and significance tests also say nothing about the possibility of drawing conclusions from a sample to a population. Some statisticians have proposed abandoning p-values and focusing more on other inferential statistics, such as confidence intervals, likelihood ratios, or Bayes factors, but there is heated debate on the feasibility of these alternatives. Others have suggested removing fixed significance thresholds and interpreting p-values as continuous indices of the strength of evidence against the null hypothesis. Yet others have suggested reporting, alongside p-values, the prior probability of a real effect that would be required to obtain a false positive risk below a pre-specified threshold.
That said, in 2019 an ASA task force convened to consider the use of statistical methods in scientific studies, specifically hypothesis tests and p-values, and their connection to replicability. It states that "Different measures of uncertainty can complement one another; no single measure serves all purposes", citing the p-value as one of these measures. The task force also stresses that p-values can provide valuable information both when the specific value is considered and when it is compared to some threshold. In general, it stresses that "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data". This sentiment was further supported by a comment in Nature Human Behaviour which, in response to recommendations to redefine statistical significance to p ≤ 0.005, proposed that "researchers should transparently report and justify all choices they make when designing a study, including the alpha level."