In mathematics, a **matrix** (plural **matrices**) is a rectangular table of numbers or, more generally, of elements of a fixed ring. In this article, if unspecified, the entries of a matrix are always real or complex numbers.

Matrices are useful to record data that depends on two categories, and to keep track of the coefficients of systems of linear equations and linear transformations.

For the development and applications of matrices, see matrix theory.

The term is also used in other areas, see matrix.

The entry in the *i*-th row and *j*-th column of a matrix *A* is written *A*[*i*, *j*] or *a*_{ij} (in programming languages, often `A[i][j]`).
The notation *A* = (*a*_{ij}) means that *A*[*i*, *j*] = *a*_{ij} for all indices *i* and *j*.
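As a small illustration of this indexing notation (the matrix values here are arbitrary; note that most programming languages index from 0 while the mathematical convention indexes from 1):

```python
# A 2-by-3 matrix as a list of rows.
A = [[1, 2, 3],
     [4, 5, 6]]

# The mathematical entry a_{2,3} (row 2, column 3) is A[1][2] in 0-based code.
assert A[1][2] == 6
```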

If we start with a ring *R*, we can consider the set M(*m*,*n*, *R*) of all *m* by *n* matrices with entries in *R*. Addition and multiplication of these matrices can be defined as in the case of real or complex matrices (see below). The set M(*n*, *R*) of all square *n* by *n* matrices over *R* is a ring in its own right, isomorphic to the endomorphism ring of the left *R* module *R*^{n}.

If *R* is commutative, then M(*n*, *R*) is a unitary associative algebra over *R*. It is then also meaningful to define the determinant of square matrices using the *Leibniz formula*; a matrix is invertible if and only if its determinant is invertible in *R*.

All statements made in this article for real or complex matrices remain correct for matrices over an arbitrary field.

Matrices over a polynomial ring are important in the study of control theory.

A **partitioned matrix** or **block matrix** is a matrix of matrices, that is, a matrix written in terms of smaller submatrices.

Certain special matrices are so important that they are given special names, as listed in list of matrices. Below are some examples:

- Symmetric matrices are such that elements symmetric about the *main diagonal* (from the upper left to the lower right) are equal, that is, *a*_{i,j} = *a*_{j,i}.
- Hermitian (or self-adjoint) matrices are such that elements symmetric about the diagonal are each other's complex conjugates, that is, *a*_{i,j} = *a*^{*}_{j,i}, where the superscript '*' signifies complex conjugation.
- Toeplitz matrices have common elements on their diagonals, that is, *a*_{i,j} = *a*_{i+1,j+1}.
- Stochastic matrices are square matrices whose columns are probability vectors; they are used to define Markov chains.
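The defining properties of the first two classes can be checked directly. A quick NumPy sketch with illustrative values (the matrices below are arbitrary examples, not taken from the article):

```python
import numpy as np

S = np.array([[1, 2],
              [2, 3]])            # symmetric: a_ij = a_ji
H = np.array([[2, 1 + 1j],
              [1 - 1j, 3]])       # Hermitian: a_ij = conj(a_ji)

assert (S == S.T).all()           # symmetry test
assert (H == H.conj().T).all()    # Hermitian test
```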

If two *m*-by-*n* matrices *A* and *B* are given, we may define their **sum** *A + B* as the *m*-by-*n* matrix computed by adding corresponding elements, i.e.,
(*A + B*)[*i, j*] = *A*[*i, j*] + *B*[*i, j*]. For example
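Entrywise addition can be sketched with NumPy (illustrative values):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# (A + B)[i, j] = A[i, j] + B[i, j], computed elementwise.
assert ((A + B) == np.array([[6, 8], [10, 12]])).all()
```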

If a matrix *A* and a number *c* are given, we may define the **scalar multiplication** *cA* by
(*cA*)[*i*, *j*] = *c* · *A*[*i*, *j*].
For example
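A minimal scalar-multiplication sketch (illustrative values):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
c = 2

# (cA)[i, j] = c * A[i, j], applied to every entry.
assert ((c * A) == np.array([[2, 4], [6, 8]])).all()
```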

**Multiplication** of two matrices is well-defined only if the number of columns of the first matrix is the same as the number of rows of the second matrix. If *A* is an *m*-by-*n* matrix (*m* rows, *n* columns) and *B* is an *n*-by-*p* matrix (*n* rows, *p* columns), then their **product** *AB* is the *m*-by-*p* matrix (*m* rows, *p* columns) given by

- (*AB*)[*i*, *j*] = *A*[*i*, 1] * *B*[1, *j*] + *A*[*i*, 2] * *B*[2, *j*] + ... + *A*[*i*, *n*] * *B*[*n*, *j*] for each pair *i* and *j*.
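The entry formula above can be implemented directly; the sketch below (with arbitrary example matrices) cross-checks it against NumPy's built-in product:

```python
import numpy as np

def matmul(A, B):
    """Product via the entry formula (AB)[i, j] = sum_k A[i, k] * B[k, j]."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must match rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1, 2, 3], [4, 5, 6]])        # 2-by-3
B = np.array([[7, 8], [9, 10], [11, 12]])   # 3-by-2
assert (matmul(A, B) == A @ B).all()        # agrees with NumPy's product
```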

- (*AB*)*C* = *A*(*BC*) for all *k*-by-*m* matrices *A*, *m*-by-*n* matrices *B* and *n*-by-*p* matrices *C* ("associativity").
- (*A* + *B*)*C* = *AC* + *BC* for all *m*-by-*n* matrices *A* and *B* and *n*-by-*k* matrices *C* ("distributivity").
- *C*(*A* + *B*) = *CA* + *CB* for all *m*-by-*n* matrices *A* and *B* and *k*-by-*m* matrices *C* ("distributivity").
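These laws can be verified numerically for particular matrices (a spot check with random integer matrices, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 3))
B = rng.integers(-5, 5, (3, 4))
C = rng.integers(-5, 5, (4, 2))
D = rng.integers(-5, 5, (3, 4))   # same shape as B, for the distributive laws

assert ((A @ B) @ C == A @ (B @ C)).all()    # associativity
assert ((B + D) @ C == B @ C + D @ C).all()  # right distributivity
assert (A @ (B + D) == A @ B + A @ D).all()  # left distributivity
```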

For other, less commonly encountered ways to multiply matrices, see matrix multiplication.

Matrices can conveniently represent linear transformations because matrix multiplication neatly corresponds to the composition of maps, as will be described next.

Here and in the sequel we identify **R**^{n} with the set of "columns" or *n*-by-1 matrices.
For every linear map *f* : **R**^{n} `->` **R**^{m} there exists a unique *m*-by-*n* matrix *A* such that *f*(*x*) = *Ax* for all *x* in **R**^{n}.
We say that the matrix *A* "represents" the linear map *f*.
Now if the *k*-by-*m* matrix *B* represents another linear map *g* : **R**^{m} `->` **R**^{k}, then the composite map *g* ∘ *f* is represented by *BA*. This follows from the above-mentioned associativity of matrix multiplication.
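The correspondence between composition and the product *BA* can be sketched with small example matrices (illustrative values):

```python
import numpy as np

# f: R^2 -> R^3 represented by A (3-by-2); g: R^3 -> R^2 represented by B (2-by-3).
A = np.array([[1, 0], [0, 1], [1, 1]])
B = np.array([[1, 2, 0], [0, 1, 3]])
x = np.array([2, 5])

# g(f(x)) equals (BA)x: composition of maps corresponds to the product BA.
assert (B @ (A @ x) == (B @ A) @ x).all()
```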

The rank of a matrix *A* is the dimension of the image of the linear map represented by *A*; this is the same as the dimension of the space generated by the rows of *A*, and also the same as the dimension of the space generated by the columns of *A*.
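A quick numerical illustration of rank, using NumPy's `matrix_rank` (example matrix chosen so that one row is a multiple of another):

```python
import numpy as np

# Second row is twice the first, so the rows span a 1-dimensional space: rank 1.
A = np.array([[1, 2], [2, 4]])
assert np.linalg.matrix_rank(A) == 1

# Row rank equals column rank: the transpose has the same rank.
assert np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A)
```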

The transpose of an *m*-by-*n* matrix *A* is the *n*-by-*m* matrix *A*^{tr} (also sometimes written as *A*^{T} or ^{t}*A*) obtained by turning rows into columns and columns into rows, i.e. *A*^{tr}[*i*, *j*] = *A*[*j*, *i*] for all indices *i* and *j*. If *A* describes a linear map with respect to two bases, then the matrix *A*^{tr} describes the transpose of the linear map with respect to the dual bases, see dual space.

We have (*A* + *B*)^{tr} = *A*^{tr} + *B*^{tr} and (*AB*)^{tr} = *B*^{tr}*A*^{tr}.
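Both identities can be spot-checked numerically (illustrative matrices; note the reversal of order in the product rule):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])        # 2-by-3
B = np.array([[7, 8], [9, 10], [11, 12]])   # 3-by-2
C = np.array([[0, 1, 2], [3, 4, 5]])        # same shape as A

assert ((A + C).T == A.T + C.T).all()       # transpose of a sum
assert ((A @ B).T == B.T @ A.T).all()       # transpose of a product reverses order
```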

A **square matrix** is a matrix which has the same number of rows as columns. The set of all square *n*-by-*n* matrices, together with matrix addition and matrix multiplication is a ring. Unless *n* = 1, this ring is not commutative.

M(*n*, **R**) , the ring of real square matrices, is a real unitary associative algebra. M(*n*, **C**), the ring of complex square matrices, is a complex associative algebra.

The **unit matrix** or **identity matrix** *I*_{n}, with elements on the main diagonal set to 1 and all other elements set to 0, satisfies *MI*_{n} = *M* and *I*_{n}*N* = *N* for any *m*-by-*n* matrix *M* and *n*-by-*k* matrix *N*.

Invertible elements in this ring are called **invertible matrices** or **non-singular matrices**. An *n* by *n* matrix *A* is invertible if and only if there exists a matrix *B* such that

*AB* = *I*_{n} (= *BA*).
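A minimal sketch of inversion with NumPy (the example matrix is arbitrary but chosen to be invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # determinant 1, so invertible
B = np.linalg.inv(A)
I = np.eye(2)

# B is a two-sided inverse: AB = I_n = BA (up to floating-point rounding).
assert np.allclose(A @ B, I) and np.allclose(B @ A, I)
```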

If λ is a number and **v** is a non-zero vector such that *A***v** = λ**v**, then we call **v** an eigenvector of *A* and λ the associated eigenvalue. The number λ is an eigenvalue of *A* if and only if *A* − λ*I*_{n} is not invertible, which happens if and only if *p*_{A}(λ) = 0. Here *p*_{A}(*x*) is the characteristic polynomial of *A*. This is a polynomial of degree *n* and has therefore *n* complex roots (counting multiple roots according to their multiplicity). In this sense, every square matrix has *n* complex eigenvalues.
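A numerical illustration of the defining relation *A***v** = λ**v** (an upper-triangular example, whose eigenvalues are its diagonal entries 2 and 3):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
vals, vecs = np.linalg.eig(A)   # eigenvalues and eigenvectors (as columns)

assert np.allclose(sorted(vals.real), [2.0, 3.0])
# Each eigenpair satisfies A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```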

The determinant of a square matrix *A* is the product of its *n* eigenvalues, but it can also be defined by the *Leibniz formula*. Invertible matrices are precisely those matrices with nonzero determinant.
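The relation between determinant and eigenvalues can be spot-checked (illustrative matrix with determinant 4·3 − 2·1 = 10 and eigenvalues 2 and 5):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
vals = np.linalg.eigvals(A)

# det(A) equals the product of the eigenvalues.
assert np.isclose(np.prod(vals).real, np.linalg.det(A))
```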

The Gauss-Jordan elimination algorithm is of central importance: it can be used to compute determinants, ranks and inverses of matrices and to solve systems of linear equations.
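A minimal Gauss-Jordan sketch for solving a linear system, cross-checked against NumPy's solver (assumes a square, invertible coefficient matrix; no rank or singularity handling):

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination on the augmented matrix [A | b]."""
    A = A.astype(float)
    M = np.hstack([A, np.asarray(b, float).reshape(-1, 1)])
    n = len(M)
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the pivot row.
        pivot = max(range(col, n), key=lambda r: abs(M[r, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]            # scale the pivot row so the pivot is 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]  # eliminate the column everywhere else
    return M[:, -1]                      # the reduced augmented column is the solution

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
assert np.allclose(gauss_jordan_solve(A, b), np.linalg.solve(A, b))
```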

The trace of a square matrix is the sum of its diagonal entries, which equals the sum of its *n* eigenvalues.
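A quick check that the trace equals the sum of the eigenvalues (same illustrative matrix as above, with trace 4 + 3 = 7):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

assert np.trace(A) == 7.0
assert np.isclose(np.linalg.eigvals(A).sum().real, np.trace(A))
```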

See Glossary of matrix theory for more definitions in matrix theory.