A sequence of random variables $x_1,\dots,x_n,\dots$ converges in mean squares to a random variable $x$ if $E[x^2]$ and $E[x_n^2]$ exist and the expectation of the squared (Euclidean) distance between $x_n$ and $x$ converges to zero as $n$ tends to infinity:
\[
\lim_{n\to\infty} E\bigl[\|x_n - x\|^2\bigr] = 0.
\]
In particular, $x$ can be a constant, $x = \theta$. In this case convergence of $x_n$ to $\theta$ in mean squares is equivalent to the convergence of both the bias and the variance of $x_n$ to zero as $n$ tends to infinity. Convergence in mean squares implies convergence in probability (the converse does not hold in general). Convergence in mean squares is a particular case of convergence in the $p$th mean (or in $L^p$ norm), which requires that $E[|x|^p]$ and $E[|x_n|^p]$ exist and that
\[
\lim_{n\to\infty} E\bigl[|x_n - x|^p\bigr] = 0.
\]
Convergence in $p$th mean implies convergence in $r$th mean for every $r \in [1, p)$.
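As a brief sketch of the constant-limit case (notation as above, writing $b_n = E[x_n] - \theta$ for the bias; the decomposition and the inequality are standard results, not taken from the text): since the cross term vanishes,
\[
E\bigl[(x_n - \theta)^2\bigr] = E\bigl[(x_n - E[x_n])^2\bigr] + \bigl(E[x_n] - \theta\bigr)^2 = \operatorname{Var}(x_n) + b_n^2,
\]
so the mean squared error tends to zero if and only if both $\operatorname{Var}(x_n) \to 0$ and $b_n \to 0$. Similarly, Markov's inequality applied to $(x_n - x)^2$ gives, for every $\varepsilon > 0$,
\[
P\bigl(|x_n - x| \ge \varepsilon\bigr) \le \frac{E\bigl[|x_n - x|^2\bigr]}{\varepsilon^2},
\]
which is why convergence in mean squares implies convergence in probability.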