In research and statistical analysis, the degree to which an observed difference between two test groups or conditions is unlikely to have occurred by chance alone. Testing for significance begins with the assumption that there is no meaningful difference between the groups or conditions under investigation (the null hypothesis). Statistical procedures are then applied, and a result indicating a probability of less than 5% (P < 0.05) that the difference arose by chance is usually sufficient to reject the null hypothesis: the observed difference is statistically significant. Parametric tests (e.g. Student’s t test to compare means) assume that observations follow a normal, or Gaussian, distribution, in which approximately 95% of observations lie within two standard deviations of the mean. Nonparametric tests (e.g. the Mann-Whitney U test) make no assumptions about the distribution. See also standard error of the mean.
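The procedure described above can be sketched in Python, assuming the SciPy library is available; the group sizes, means, and significance threshold here are illustrative assumptions, not values from the entry.

```python
import numpy as np
from scipy import stats

# Illustrative data: two groups of 30 observations (assumed values).
rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.8, scale=1.0, size=30)

# Parametric test: Student's t test, which assumes the observations
# are drawn from a normal (Gaussian) distribution.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Nonparametric test: Mann-Whitney U test, which makes no assumption
# about the distribution of the observations.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)

# The conventional threshold: P < 0.05 rejects the null hypothesis.
alpha = 0.05
print(f"t test:       P = {t_p:.4f} "
      f"({'significant' if t_p < alpha else 'not significant'})")
print(f"Mann-Whitney: P = {u_p:.4f} "
      f"({'significant' if u_p < alpha else 'not significant'})")
```

Both tests return a P value; the parametric test is typically more powerful when its normality assumption holds, while the nonparametric test remains valid when it does not.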