A measure of the confidence that we can have in the results obtained from a psychological test. A key question is whether the variability in the scores obtained by different individuals is due to real differences between the individuals or to chance variations resulting from inadequacies in the testing process. The ratio
\[
\frac{\text{variance of the true scores}}{\text{variance of the observed scores}}
\]

is the reliability coefficient and its square root is the reliability index. The true scores are unknown, but the coefficient can be estimated by using repeat tests. Suppose $n$ individuals are given $k$ similar tests. Let $x_m$ be the total score obtained by individual $m$ over the $k$ tests, and let $\bar{x}$ and $s^2$ be the mean and variance, respectively, of $x_1, x_2, \ldots, x_n$. If each test consists of a single question for which an answer is either correct or incorrect, let $p_j$ denote the proportion of correct answers to question $j$. Alternatively, if a variety of scores are possible on test $j$, let $s_j^2$ be the variance of the scores obtained. An approximation to the reliability coefficient is provided by Cronbach’s alpha, given by

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} s_j^2}{s^2}\right).
\]
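As an illustration, here is a minimal Python sketch of this calculation (using NumPy; the score matrix and variable names are invented for the example, not taken from the original):

```python
import numpy as np

# Hypothetical data: n = 5 individuals (rows), k = 4 similar tests (columns).
scores = np.array([
    [4, 5, 3, 4],
    [2, 1, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 1],
], dtype=float)

k = scores.shape[1]
s2_j = scores.var(axis=0, ddof=1)   # s_j^2: variance of the scores on test j
totals = scores.sum(axis=1)         # x_m: total score of individual m over the k tests
s2 = totals.var(ddof=1)             # s^2: variance of the total scores

alpha = (k / (k - 1)) * (1 - s2_j.sum() / s2)   # Cronbach's alpha
print(round(alpha, 3))
```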
Other approximations are provided by the Kuder–Richardson formulae KR20 and KR21 (named after the equation numbers in Kuder and Richardson's 1937 paper):

\[
\mathrm{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} p_j(1 - p_j)}{s^2}\right),
\qquad
\mathrm{KR21} = \frac{k}{k-1}\left(1 - \frac{\bar{x}\,(k - \bar{x})}{k\,s^2}\right).
\]
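A comparable sketch for the Kuder–Richardson formulae, again with hypothetical data (here each of the $k$ tests is a single question scored 1 if correct and 0 if incorrect):

```python
import numpy as np

# Hypothetical right/wrong answers: n = 5 individuals (rows), k = 5 questions (columns).
answers = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
], dtype=float)

k = answers.shape[1]
p_j = answers.mean(axis=0)     # p_j: proportion of correct answers to question j
totals = answers.sum(axis=1)   # x_m: total score of individual m
s2 = totals.var(ddof=1)        # s^2: variance of the total scores
x_bar = totals.mean()          # x-bar: mean total score

kr20 = (k / (k - 1)) * (1 - (p_j * (1 - p_j)).sum() / s2)
kr21 = (k / (k - 1)) * (1 - x_bar * (k - x_bar) / (k * s2))
print(round(kr20, 3), round(kr21, 3))
```

For binary questions KR20 coincides with Cronbach’s alpha; KR21 additionally assumes that all questions are of equal difficulty and therefore generally gives a somewhat lower value.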
An alternative approach is the split-half method, in which each test provides two scores (for example, the score on the odd-numbered questions and the score on the even-numbered questions). Let $r$ be the correlation coefficient between the two sets of scores. The Spearman–Brown formula then measures the reliability as

\[
\frac{2r}{1 + r}.
\]
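A short sketch of the split-half calculation follows, again with invented data; the six-question test and the odd/even split are purely illustrative. The factor $2r/(1+r)$ steps the half-test correlation up to an estimate for the full-length test.

```python
import numpy as np

# Hypothetical test with 6 questions answered by 5 individuals (1 = correct, 0 = incorrect).
scores = np.array([
    [1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
], dtype=float)

odd_half = scores[:, 0::2].sum(axis=1)    # total on questions 1, 3, 5
even_half = scores[:, 1::2].sum(axis=1)   # total on questions 2, 4, 6

r = np.corrcoef(odd_half, even_half)[0, 1]   # split-half correlation coefficient
reliability = 2 * r / (1 + r)                # Spearman-Brown estimate for the full test
print(round(reliability, 3))
```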
See also test-retest reliability.