
Matrix inversion

Matrix inversion is the following problem in linear algebra: given a square n-by-n matrix A, find a square n-by-n matrix B (if one exists) such that AB = BA = In, the n-by-n identity matrix.

Gauss-Jordan elimination is an algorithm that can be used to determine whether a given matrix is invertible and to find its inverse. An alternative is the LU decomposition, which factors the matrix into a lower and an upper triangular matrix, each of which is easier to invert (for symmetric positive-definite matrices, the Cholesky decomposition plays the same role). For special purposes, it may be convenient to invert matrices by treating mn-by-mn matrices as m-by-m matrices of n-by-n matrices, and applying one or another formula recursively (matrices of other sizes can be padded out with dummy rows and columns). For other purposes, a variant of Newton's method may be convenient, particularly when dealing with families of related matrices, since the inverses of earlier matrices can be used to seed the iteration for later ones.
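The Gauss-Jordan approach can be sketched in a few lines: augment A with the identity matrix and row-reduce until the left half becomes the identity, at which point the right half is the inverse. The following is a minimal illustration in pure Python (function name and singularity tolerance are choices made here, not part of the article):

```python
def gauss_jordan_inverse(a):
    """Invert a square matrix via Gauss-Jordan elimination with partial pivoting.

    Raises ValueError if the matrix is (numerically) singular.
    """
    n = len(a)
    # Build the augmented matrix [A | I].
    aug = [list(map(float, row)) + [float(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the pivot row.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now A^{-1}.
    return [row[n:] for row in aug]
```

For example, the inverse of [[4, 7], [2, 6]] (determinant 10) is [[0.6, -0.7], [-0.2, 0.4]]. If elimination encounters a column with no usable pivot, the matrix is singular and no inverse exists, which is how the same procedure doubles as an invertibility test.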

Writing the transpose of the matrix of cofactors, known as the adjugate (or classical adjoint) matrix, can also be an efficient way to calculate the inverse of small matrices (since this method is essentially recursive, it becomes inefficient for large matrices). To determine the inverse, we calculate a matrix of cofactors:

A⁻¹ = (1/det(A)) Cᵀ

where det(A) is the determinant of A, C is the matrix of cofactors, and Cᵀ represents the matrix transpose.

In most practical applications, it is in fact not necessary to invert a matrix, but only to solve a system of linear equations. Various fast algorithms for special classes of such systems have been developed.
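To illustrate the point, solving Ax = b directly by Gaussian elimination with back substitution avoids computing A⁻¹ at all, and costs roughly a third as many operations as a full inversion. A minimal sketch in pure Python (names and tolerance are assumptions of this example):

```python
def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    then back substitution, without ever forming the inverse of A."""
    n = len(a)
    # Augment A with the right-hand side b.
    aug = [list(map(float, row)) + [float(bv)] for row, bv in zip(a, b)]
    # Forward elimination to upper triangular form.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("system is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # Back substitution from the last row upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x
```

With A = [[4, 7], [2, 6]] and b = [1, 0], this returns [0.6, -0.2], which is exactly the first column of A⁻¹; repeating with each standard basis vector as b would recover the full inverse, which is one way to see why inversion costs more than a single solve.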