
Linear independence

In linear algebra, a set of elements of a vector space is linearly independent if none of the vectors in the set can be written as a linear combination of finitely many other vectors in the set. For instance, in three-dimensional Euclidean space R3, the three vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) are linearly independent, while (2, -1, 1), (1, 0, 1) and (3, -1, 2) are not (since the third vector is the sum of the first two). Vectors which are not linearly independent are called linearly dependent.
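For finitely many vectors in Rn, linear independence can be checked numerically by comparing the rank of the matrix whose rows are the given vectors with the number of vectors. The following sketch uses NumPy to test the two triples mentioned above; the library and function choices are an illustration, not part of this article.

import numpy as np

# Rows are the vectors (2, -1, 1), (1, 0, 1), (3, -1, 2) from above.
dependent = np.array([[2, -1, 1],
                      [1,  0, 1],
                      [3, -1, 2]])

# Rows are (1, 0, 0), (0, 1, 0), (0, 0, 1).
independent = np.array([[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 1]])

# k vectors are linearly independent exactly when the rank equals k.
print(np.linalg.matrix_rank(dependent))    # 2, so the vectors are dependent
print(np.linalg.matrix_rank(independent))  # 3, so the vectors are independent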

Table of contents
1 Definition
2 Example I
3 Example II
4 Example III: (Calculus required)

Definition

Let V be a vector space over a field K. If v1,v2,...,vn are elements of V, we say that they are linearly dependent over K if there exist elements a1,a2,...,an in K, not all equal to zero, such that:

a1v1 + a2v2 + ... + anvn = 0

or, more concisely:

Σi=1..n aivi = 0

(Note that the zero on the right is the zero element of V, not the zero element of K.)

If no such field elements exist, then we say that v1,v2,...,vn are linearly independent. An infinite subset of V is said to be linearly independent if all of its finite subsets are linearly independent.

To focus the definition on linear independence, we can say that the vectors v1,v2,...,vn are linearly independent if and only if the following condition is satisfied:

Whenever a1,a2,...,an are elements of K such that:
a1v1 + a2v2 + ... + anvn = 0
then ai = 0 for all i=1,2,...,n.
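In concrete terms, checking this condition for given vectors amounts to solving a homogeneous system of linear equations for the coefficients a1,a2,...,an. The following rough sketch uses SymPy, and the vectors v1 = (1,2) and v2 = (2,4) are a made-up example, not taken from the article.

from sympy import symbols, solve

a1, a2 = symbols('a1 a2')
v1, v2 = (1, 2), (2, 4)

# One equation per coordinate of a1*v1 + a2*v2 = (0, 0).
equations = [a1*v1[i] + a2*v2[i] for i in range(2)]

# solve leaves a2 free and returns {a1: -2*a2}, so a nonzero choice such as
# a2 = 1, a1 = -2 gives -2*(1,2) + 1*(2,4) = (0,0): the vectors are dependent.
print(solve(equations, [a1, a2]))

For a linearly independent pair, such as (1,1) and (-3,2) from Example I below, the same call returns only the zero solution {a1: 0, a2: 0}.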

The concept of linear independence is important because a set of vectors which is linearly independent and spans some vector space forms a basis for that vector space.

Example I

Show that the vectors (1,1) and (-3,2) in R2 are linearly independent.

Proof:

Let a, b be two real numbers such that:

a(1,1) + b(-3,2) = (0,0)

Then:

(a - 3b, a + 2b) = (0,0),

so a - 3b = 0 and a + 2b = 0.

Solving for a and b, we find that a = 0 and b = 0.
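The same conclusion can be reached by noting that the coefficient matrix of this 2x2 system is invertible, so the homogeneous system has only the trivial solution. A quick numeric check, not part of the original proof:

import numpy as np

# Coefficient matrix of the system a - 3b = 0, a + 2b = 0,
# whose columns are the vectors (1,1) and (-3,2).
A = np.array([[1, -3],
              [1,  2]])

# A nonzero determinant means the only solution of A @ (a, b) = 0
# is a = b = 0, i.e. the two vectors are linearly independent.
print(np.linalg.det(A))  # 5.0 (up to floating-point rounding)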

Example II

Let V=Rn and consider the following elements in V:

e1 = (1,0,0,...,0)
e2 = (0,1,0,...,0)
...
en = (0,0,0,...,1)

Then e1,e2,...,en are linearly independent.

Proof:

Suppose that a1, a2, ..., an are real numbers such that

a1e1 + a2e2 + ... + anen = 0

Since

a1e1 + a2e2 + ... + anen = (a1,a2,...,an),

it follows that ai = 0 for all i in {1,...,n}.
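In matrix terms, stacking e1,...,en as rows gives the n x n identity matrix, which has rank n; this restates the proof above. A small check for, say, n = 4, again purely as an illustration:

import numpy as np

n = 4
# The rows of the identity matrix are e1, ..., en.
E = np.eye(n)

# Rank n confirms that e1, ..., en are linearly independent.
print(np.linalg.matrix_rank(E))  # 4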

Example III: (Calculus required)

Let V be the vector space of all functions of a real variable t. Then the functions e^t and e^(2t) in V are linearly independent.

Proof:

Suppose a and b are two real numbers such that

a e^t + b e^(2t) = 0   (1)

for all values of t. We need to show that a = 0 and b = 0. To do this, we differentiate equation (1) with respect to t, which gives

a e^t + 2b e^(2t) = 0   (2)

which also holds for all values of t.

Subtracting equation (1) from equation (2), we obtain:

b e^(2t) = 0

and, by plugging in t = 0, we get b = 0.

From equation (1) we then get:

a e^t = 0

and again for t = 0 we find a = 0.
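The two relations used in the proof, evaluated at t = 0, form a linear system in a and b with only the trivial solution. This can be reproduced symbolically, for instance with SymPy; the code is an illustration, not part of the article.

from sympy import symbols, exp, diff, solve

a, b, t = symbols('a b t')

# Equation (1) and its derivative, equation (2).
f = a*exp(t) + b*exp(2*t)
df = diff(f, t)            # a*exp(t) + 2*b*exp(2*t)

# Both must vanish for every t, in particular at t = 0,
# giving a + b = 0 and a + 2*b = 0.
system = [f.subs(t, 0), df.subs(t, 0)]

print(solve(system, [a, b]))  # {a: 0, b: 0}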

A linear dependence among vectors v1,...,vn is a vector (a1,...,an) with n scalar components, not all zero, such that a1v1+...+anvn=0. If such a linear dependence exists, then the n vectors are linearly dependent. It makes sense to identify two linear dependences if one arises as a non-zero multiple of the other, because in this case the two describe the same linear relationship among the vectors. Under this identification, the set of all linear dependences among v1, ..., vn is a projective space.
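For the dependent triple from the introduction, such a linear dependence can be computed as a null-space vector of the matrix whose columns are the vectors; any nonzero scalar multiple of it describes the same relationship. A sketch using SymPy, again not part of the original text:

from sympy import Matrix

# Columns are v1 = (2,-1,1), v2 = (1,0,1), v3 = (3,-1,2).
M = Matrix([[ 2, 1, 3],
            [-1, 0, -1],
            [ 1, 1, 2]])

# Each null-space basis vector (a1, a2, a3) satisfies a1*v1 + a2*v2 + a3*v3 = 0.
# Here the null space is spanned by (-1, -1, 1), i.e. v3 = v1 + v2.
print(M.nullspace())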
