An r×c matrix consists of a rectangular array with r rows and c columns, in which the elements are either numbers or algebraic expressions. Example matrices (the plural form) are:

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
When the array is not written out in full, a matrix is usually denoted by a bold-face capital letter, e.g. X, or by a typical element (or entry) from the array, shown in curly brackets, e.g. {xjk}, where xjk is the element in the jth row and kth column of the matrix. If r=c the matrix is square.
If a matrix X={xjk} is multiplied by the real number s, then the result is the matrix sX, in which the element in the jth row and kth column is sxjk. In this context a real number s is often referred to as a scalar.
Two matrices, A and B, can be multiplied together only if the number of columns of the first matrix is equal to the number of rows of the second. If A is an m×n matrix and B is an n×p matrix then the product AB is an m×p matrix. However, if p≠m then the product BA does not exist. The rule for the construction of the product is as follows. Let ejk denote the element in the jth row and kth column of the product AB, with ajk and bjk denoting typical elements in A and B. Then ejk is given by

$$e_{jk} = \sum_{l=1}^{n} a_{jl}b_{lk}.$$
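The row-by-column rule can be checked numerically; the sketch below uses numpy (an assumption here, not part of the original entry), with illustrative 2×3 and 3×2 matrices:

```python
import numpy as np

# A is 2x3 (m x n), B is 3x2 (n x p): the product AB is 2x2 (m x p).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7,  8],
              [9, 10],
              [11, 12]])

AB = A @ B  # e_jk is the sum over l of a_jl * b_lk

# The element in row 0, column 0 is 1*7 + 2*9 + 3*11 = 58.
```

Here p = m = 2, so the product BA also exists, but it is a 3×3 matrix rather than a 2×2 one.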
If A and B have the same values of r and c and if ajk=bjk for all j and k, then A=B.
A diagonal matrix is a square matrix with all elements equal to 0, except for those on the leading diagonal (which runs from top-left to bottom-right). This diagonal is also called the main diagonal. A matrix (not necessarily square) in which all the entries are equal on every negatively sloping diagonal is a Toeplitz matrix. For example:

$$\begin{pmatrix} a & b & c & d \\ e & a & b & c \\ f & e & a & b \end{pmatrix}.$$
An identity matrix, usually denoted by I, is a diagonal matrix with all leading diagonal elements equal to 1. The size of an identity matrix may be indicated using a suffix: In is an n×n identity matrix.
The transpose of an m×n matrix M is the n×m matrix formed by interchanging the elements of the rows and columns of M. It is denoted by M′. The jth row of M′ is the transpose of the jth column of M, and vice versa.
If a square matrix S, with typical element sjk, is equal to its transpose, S′, then it is a symmetric matrix satisfying

$$s_{jk} = s_{kj} \quad \text{for all } j \text{ and } k.$$
A square matrix that is not symmetric is an asymmetric matrix. If a square matrix S satisfies the equation SS=S then it is idempotent. The product SS may be written as S2. If it exists, the inverse of a square matrix, S, is denoted by S−1. It satisfies the relations

$$S^{-1}S = SS^{-1} = I.$$
Only square matrices can have an inverse (but see ‘generalized inverse’ below). If S−1 exists then it will be the same size as S. A matrix that has an inverse is said to be non-singular (or regular, or invertible). A square matrix without an inverse is said to be singular.
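As an illustration of the defining relations of the inverse, here is a short numpy sketch (numpy is assumed; the particular matrix is chosen only for illustration):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # non-singular: determinant 2*1 - 1*1 = 1

S_inv = np.linalg.inv(S)

# Both products S_inv @ S and S @ S_inv equal the 2x2 identity
# (up to floating-point rounding).
```

A singular matrix, such as one with a row of zeros, would cause `np.linalg.inv` to raise an error instead.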
A square matrix is described as being an upper triangular matrix if all the elements below the leading diagonal are zero, or as a lower triangular matrix if all the elements above the leading diagonal are zero. The matrices U and L are examples:

$$U = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}, \qquad L = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}.$$
A generalized inverse of the m×n matrix M is any n×m matrix M− satisfying

$$MM^{-}M = M.$$

The Moore–Penrose inverse is the unique generalized inverse that additionally satisfies M−MM− = M− and for which both MM− and M−M are symmetric.
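The defining property of a generalized inverse can be verified numerically; numpy's `pinv` computes the Moore–Penrose inverse (numpy and the example matrix are assumptions for illustration):

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])  # 3x2 and rank 1, so no ordinary inverse exists

M_pinv = np.linalg.pinv(M)  # the 2x3 Moore-Penrose inverse

# Defining property of a generalized inverse: M @ M_pinv @ M equals M.
```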
If a matrix M is multiplied by its transpose (to give either MM′ or M′M) then the result is a symmetric matrix.
If M is square and the product MM′ is an identity matrix, then M′=M−1 and M is said to be an orthogonal matrix.
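A standard example of an orthogonal matrix is a 2×2 rotation matrix; the numpy sketch below (numpy assumed) checks that its transpose acts as its inverse:

```python
import numpy as np

theta = np.pi / 3
# A rotation matrix is orthogonal: Q @ Q.T is the identity.
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Hence Q.T coincides with the ordinary inverse of Q.
```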
A matrix with just one row is called a row vector. A matrix with just one column is called a column vector. Column vectors are usually denoted with a bold-face lower-case letter, e.g. x; row vectors are written as their transpose, e.g. x′. A vector with a single element (i.e. a 1×1 matrix) is a scalar.
Vectors multiply together in the same way as matrices (see above). Thus, if v is an n×1 column vector, and v′ is its transpose, then the product vv′ is an n×n symmetric matrix, and the product v′v is a scalar.
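The contrast between vv′ (a matrix) and v′v (a scalar) is easy to see numerically; a small numpy sketch (numpy assumed):

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])  # a 3x1 column vector

outer = v @ v.T   # 3x3 symmetric matrix
inner = v.T @ v   # 1x1 matrix, i.e. the scalar 1 + 4 + 9 = 14
```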
The set of n×1 vectors v1, v2,…, vm is linearly independent if the only values of the scalars a1, a2,…, am for which

$$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_m\mathbf{v}_m = \mathbf{0},$$

where 0 is an n×1 vector with every element equal to 0, are a1=a2=⋯=am=0. If the set is not linearly independent then it is linearly dependent, in which case there are values for the scalars a1, a2,…, am, not all equal to 0, such that a1v1 + a2v2 + ⋯ + amvm = 0. If a set of two or more vectors is linearly dependent then at least one of the vectors, vk say, is a linear combination of the others, i.e.

$$\mathbf{v}_k = b_1\mathbf{v}_1 + \cdots + b_{k-1}\mathbf{v}_{k-1} + b_{k+1}\mathbf{v}_{k+1} + \cdots + b_m\mathbf{v}_m$$

for some scalars b1, b2,…, bk−1, bk+1,…, bm.
The rank of a matrix is the maximum number of linearly independent rows, which is the same as the maximum number of linearly independent columns. Thus the rank of a matrix is equal to that of its transpose. If a matrix has r rows and c columns, with r≤c, then the rank is ≤ r; if r>c then the rank is ≤ c. If the rank is equal to the smaller of r and c then the matrix is of full rank.
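A numerical illustration of rank, using numpy (an assumption) and a 3×3 matrix whose third row is the sum of the first two:

```python
import numpy as np

# Rows 0 and 1 are linearly independent; row 2 = row 0 + row 1,
# so the rank is 2 and the matrix is not of full rank.
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [1.0, 1.0, 5.0]])

r = np.linalg.matrix_rank(M)

# The transpose has the same rank.
```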
If A is a square matrix, x is a column vector not equal to 0, and λ is a scalar such that

$$A\mathbf{x} = \lambda\mathbf{x},$$
then x is an eigenvector of A and λ is the corresponding eigenvalue. Eigenvectors and eigenvalues are also referred to as characteristic vectors and characteristic values. If x is the column vector (x1 x2…xn)′ and A is an n×n symmetric matrix with typical element ajk, then the product x′Ax, which is a scalar, is described as a quadratic form because it is equal to

$$\sum_{j=1}^{n}\sum_{k=1}^{n} a_{jk}x_j x_k,$$

which is a linear combination of all the squared terms (such as $x_1^2$) and cross-products (such as $x_1x_2$).
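The quadratic form can be evaluated either as the matrix product x′Ax or by summing the squared and cross-product terms directly; a small numpy sketch (numpy and the example values assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric
x = np.array([[1.0], [2.0]])

q = (x.T @ A @ x).item()  # the scalar x'Ax

# Term by term: 2*x1^2 + 3*x2^2 + 2*(1)*x1*x2 = 2 + 12 + 4 = 18.
```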
A symmetric matrix A is a positive definite matrix if x′Ax>0 for all non-zero x; it is a positive semi-definite matrix if x′Ax≥0 for all x and there is at least one non-zero x for which x′Ax=0.
The trace of a square matrix is the sum of the terms on the leading diagonal.
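A one-line numpy check of the trace (numpy assumed, not part of the original entry):

```python
import numpy as np

S = np.array([[1.0, 9.0],
              [9.0, 4.0]])

t = np.trace(S)  # sum of the leading-diagonal terms: 1 + 4 = 5
```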
The determinant of a 2×2 square matrix, A, is written as |A| or det(A). For

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

it is given by

$$|A| = ad - bc.$$
The determinant of a larger matrix is defined recursively in terms of cofactors. The cofactor Ajk of the entry ajk is equal to the product of (−1)j+k and the determinant of the matrix obtained by eliminating the jth row and kth column of A. The recursive definition is

$$|A| = \sum_{j=1}^{n} a_{jk}A_{jk},$$

for any choice of column k. In fact $\sum_{j=1}^{n} a_{jk}A_{jl} = |A|$ if k=l (otherwise the sum is 0). Similarly, $\sum_{k=1}^{n} a_{jk}A_{lk} = |A|$ if j=l and is otherwise 0. Thus, for a 3×3 matrix, A,

$$|A| = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}).$$
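Cofactor expansion along the first row can be checked against a library routine; the numpy sketch below (numpy and the example matrix assumed) computes a 3×3 determinant both ways:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])

# Expansion along the first row: each entry times its cofactor,
# with alternating signs (-1)^(j+k).
det_cofactor = (A[0, 0] * (A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1])
                - A[0, 1] * (A[1, 0] * A[2, 2] - A[1, 2] * A[2, 0])
                + A[0, 2] * (A[1, 0] * A[2, 1] - A[1, 1] * A[2, 0]))
# 1*(24 - 0) - 2*(0 - 5) + 3*(0 - 4) = 24 + 10 - 12 = 22
```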
The eigenvalues of a square matrix A are the roots of the characteristic equation

$$|A - \lambda I| = 0.$$
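Eigenvalues and eigenvectors can be computed numerically; for the symmetric 2×2 matrix below, the characteristic equation (2−λ)² − 1 = 0 gives λ = 1 and λ = 3 (numpy and the example matrix are assumptions for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)

# Each column x of vecs satisfies A @ x = lambda * x
# for the corresponding entry lambda of vals.
```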