

In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e. if there exists an invertible matrix P such that P^-1 A P is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.

Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power.
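As a quick illustration of that last point, here is a short NumPy sketch (the diagonal matrix shown is an arbitrary example, not one from this article):

```python
import numpy as np

# Raising a diagonal matrix to a power amounts to raising each
# diagonal entry to that power.
D = np.diag([5.0, 0.0, -2.0])  # illustrative diagonal matrix
D_cubed = np.linalg.matrix_power(D, 3)
assert np.allclose(D_cubed, np.diag([125.0, 0.0, -8.0]))
```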

The fundamental fact about diagonalizable maps and matrices is expressed by the following: an n-by-n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. In that case, taking the columns of P to be such eigenvectors makes P^-1 A P diagonal, with the corresponding eigenvalues as the diagonal entries. Likewise, a linear map T : V → V is diagonalizable if and only if V has a basis consisting of eigenvectors of T.

Another characterization: A matrix or linear map is diagonalizable over the field F if and only if its minimal polynomial is a product of distinct linear factors over F.
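For instance, a 2-by-2 Jordan block has minimal polynomial (x - 1)^2, a repeated linear factor, so it is not diagonalizable. A short NumPy check (the matrix is an illustrative example, not from this article):

```python
import numpy as np

# Jordan block with eigenvalue 1: its minimal polynomial is (x - 1)^2,
# which is not a product of *distinct* linear factors.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
I = np.eye(2)

# (x - 1) does not annihilate A, but (x - 1)^2 does:
assert not np.allclose(A - I, 0.0)
assert np.allclose((A - I) @ (A - I), 0.0)
```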

The following sufficient (but not necessary) condition is often useful: if an n-by-n matrix has n distinct eigenvalues, then it is diagonalizable.
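To see why the condition is not necessary, consider the identity matrix: it has a single repeated eigenvalue, yet it is already diagonal. A NumPy sketch:

```python
import numpy as np

# The identity matrix has the eigenvalue 1 repeated n times (so the
# "distinct eigenvalues" condition fails), yet it is trivially diagonalizable.
I3 = np.eye(3)
eigvals, P = np.linalg.eig(I3)
assert np.allclose(eigvals, 1.0)
assert np.allclose(np.linalg.inv(P) @ I3 @ P, np.diag(eigvals))
```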

Here is an example of a diagonalizable matrix: let A be any upper triangular 3-by-3 real matrix with diagonal entries 5, 0, and -2.

Since the matrix is triangular (specifically upper triangular), its eigenvalues are its diagonal entries 5, 0, and -2. Since A is a 3-by-3 matrix with 3 real, distinct eigenvalues, A is diagonalizable over R.
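This can be verified numerically. The check below uses an arbitrary upper triangular matrix with diagonal entries 5, 0, and -2 (a stand-in consistent with the description above; the off-diagonal entries are chosen freely):

```python
import numpy as np

# Stand-in upper triangular matrix; off-diagonal entries are arbitrary.
A = np.array([[5.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, -2.0]])

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors
assert np.allclose(sorted(eigvals), [-2.0, 0.0, 5.0])

# P^-1 A P is diagonal, confirming A is diagonalizable over R.
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag(eigvals))
```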

As a rule of thumb, over C almost every matrix is diagonalizable. More precisely: the set of complex n-by-n matrices that are not diagonalizable over C, considered as a subset of Cn×n, is a null set with respect to the Lebesgue measure. The same is not true over R; as n increases, it becomes less and less likely that a randomly selected real matrix is diagonalizable over R.
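A concrete obstruction over R: a rotation matrix (an illustrative example, not from this article) has non-real eigenvalues, so it is diagonalizable over C but not over R.

```python
import numpy as np

# Rotation by 60 degrees: the eigenvalues are cos(t) +/- i*sin(t),
# which are non-real, so R is not diagonalizable over R
# (though it is diagonalizable over C).
t = np.pi / 3
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
eigvals = np.linalg.eigvals(R)
assert np.iscomplexobj(eigvals)
assert not np.allclose(eigvals.imag, 0.0)
```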

An application

Diagonalization can be used to compute the powers of a matrix A efficiently, provided the matrix is diagonalizable. Suppose we have found that

P^-1 A P = D

is a diagonal matrix. Then

A^n = (P D P^-1)^n = P D^n P^-1,

and the latter is easy to calculate since it only involves the powers of a diagonal matrix.
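In code, the recipe above might look like the following sketch (the matrix A is an arbitrary diagonalizable example, not from this article):

```python
import numpy as np

# Compute A^n via diagonalization: A^n = P D^n P^-1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, hence diagonalizable

eigvals, P = np.linalg.eig(A)
n = 5
An = P @ np.diag(eigvals ** n) @ np.linalg.inv(P)

# Agrees with repeated multiplication:
assert np.allclose(An, np.linalg.matrix_power(A, n))
```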

For example, consider the following matrix:

M = [ a    0 ]
    [ a-b  b ]

Calculating the various powers of M reveals a surprising pattern:

M^2 = [ a^2      0   ]     M^3 = [ a^3      0   ]     ...
      [ a^2-b^2  b^2 ]           [ a^3-b^3  b^3 ]

The above phenomenon can be explained by diagonalizing M. To accomplish this, we need a basis of R^2 consisting of eigenvectors of M. One such eigenvector basis is given by

u = e1 + e2 = (1, 1)   and   v = e2 = (0, 1),

where ei denotes the standard basis of R^2. The reverse change of basis is given by

e1 = u - v,   e2 = v.

Straightforward calculations show that

M u = a u   and   M v = b v.

Thus, a and b are the eigenvalues corresponding to u and v, respectively. By linearity of matrix multiplication, we have that

M^n u = a^n u   and   M^n v = b^n v.

Switching back to the standard basis, we have

M^n e1 = M^n (u - v) = a^n u - b^n v = a^n e1 + (a^n - b^n) e2,
M^n e2 = M^n v = b^n e2.

The preceding relations, expressed in matrix form, are

M^n = [ a^n      0   ]
      [ a^n-b^n  b^n ],

thereby explaining the above phenomenon.
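The pattern can also be checked numerically. The sketch below assumes M has first column (a, a-b) and second column (0, b), which is consistent with the eigenvalues and eigenvectors described above; the values a = 2, b = 3 are illustrative.

```python
import numpy as np

# Assumed form of M, consistent with eigenvectors u = (1, 1) (eigenvalue a)
# and v = (0, 1) (eigenvalue b); the values of a and b are illustrative.
a, b = 2.0, 3.0
M = np.array([[a,     0.0],
              [a - b, b  ]])

n = 6
Mn = np.linalg.matrix_power(M, n)
expected = np.array([[a**n,        0.0 ],
                     [a**n - b**n, b**n]])
assert np.allclose(Mn, expected)
```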