
Addition

Addition is one of the basic operations of arithmetic. Addition combines two or more numbers, the summands, into a single number, the sum. (If there are only two terms, the summands are the augend and addend respectively.) For a definition of addition in the natural numbers, see Addition in N.

See also: counting

Table of contents
1 Important properties
2 Notation
3 Relationships to other operations and constants
4 Useful sums
5 Approximation by integrals

Important properties

When adding finitely many numbers, it doesn't matter how you group the numbers and in which order you add them; you will always get the same result. (See Associativity and Commutativity.) If you add zero to any number, the quantity won't change; zero is the identity element for addition. The sum of any number and its additive inverse (in contexts where such a thing exists) is zero.
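These properties are easy to check numerically. A minimal Python sketch, using integers so the arithmetic is exact:

```python
# Commutativity and associativity: order and grouping do not change the sum.
terms = [3, 1, 4, 1, 5]
assert sum(terms) == sum(reversed(terms)) == (3 + 1) + (4 + (1 + 5))

# Zero is the identity element for addition.
assert 7 + 0 == 7

# A number plus its additive inverse gives zero.
assert 7 + (-7) == 0
```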

Notation

If the terms are all written out individually, then addition is written using the plus sign ("+"). Thus, the sum of 1, 2, and 4 is 1 + 2 + 4 = 7. If the terms are not written out individually, then the sum may be written with an ellipsis to mark out the missing terms. Thus, the sum of all the natural numbers from 1 to 100 is 1 + 2 + ... + 99 + 100.
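The ellipsis sum above can be checked directly by iterating over every term; a quick Python sketch:

```python
# The sum 1 + 2 + ... + 99 + 100, with every term written out by the loop.
total = sum(range(1, 101))   # range(1, 101) yields 1, 2, ..., 100
print(total)                 # 5050
```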

Alternatively, the sum can be represented by the summation symbol, the capital Greek letter sigma. It is defined as:

\sum_{i=m}^{n} a_i = a_m + a_{m+1} + \cdots + a_{n-1} + a_n

The subscript gives a dummy variable, i, called the index of summation; m is the lower bound of summation, and n is the upper bound of summation. So, for example:

\sum_{i=2}^{6} i^2 = 2^2 + 3^2 + 4^2 + 5^2 + 6^2 = 90
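In Python, a generator expression mirrors sigma notation directly: the loop variable plays the role of the index i, and the range supplies the bounds m and n. A small sketch (the helper name sigma is illustrative):

```python
def sigma(f, m, n):
    """Sum f(i) for i = m, m+1, ..., n (inclusive upper bound, as in sigma notation)."""
    return sum(f(i) for i in range(m, n + 1))

# For example, the sum of i^2 for i from 2 to 6: 4 + 9 + 16 + 25 + 36 = 90.
print(sigma(lambda i: i * i, 2, 6))  # 90
```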

One may also consider sums of infinitely many terms; these are called infinite series. Notationally, we would replace n above by the infinity symbol (∞). The sum of such a series is defined as the limit of the sum of the first n terms, as n grows without bound. That is:

\sum_{i=m}^{\infty} a_i = \lim_{n \to \infty} \sum_{i=m}^{n} a_i

One can similarly replace m with negative infinity, and define

\sum_{i=-\infty}^{\infty} a_i = \lim_{t \to -\infty} \sum_{i=t}^{m} a_i + \lim_{n \to \infty} \sum_{i=m+1}^{n} a_i

for some integer m, provided both limits exist.
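The limit definition can be explored numerically: the partial sums of a convergent series settle toward the series' sum as n grows. A sketch using the geometric series \sum_{i=0}^{\infty} (1/2)^i, whose sum is 2:

```python
def partial_sum(n):
    """Sum of the first n terms of the series sum_{i=0}^infinity (1/2)^i."""
    return sum(0.5 ** i for i in range(n))

for n in (1, 5, 20, 50):
    print(n, partial_sum(n))  # approaches 2 as n grows

assert abs(partial_sum(50) - 2) < 1e-9
```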

Relationships to other operations and constants

It's possible to add fewer than 2 numbers. If you add the single term x, then the sum is x.

If you add zero terms, then the sum is zero, because zero is the identity for addition. This is known as the empty sum. These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if m = n in the definition above, then there is only one term in the sum; if m = n + 1, then there is none.
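Python's built-in sum follows the same conventions: a one-term sum is just that term, and the empty sum is zero. A quick check:

```python
# A sum with a single term x is just x.
assert sum([42]) == 42

# The empty sum is zero, the identity element for addition.
assert sum([]) == 0

# The degenerate case m = n + 1 in sigma notation: the range of indices is empty.
m, n = 5, 4
assert sum(i for i in range(m, n + 1)) == 0
```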

Many other operations can be thought of as generalised sums. If a single term x appears in a sum n times, then the sum is nx, the result of a multiplication. If n is not a natural number, then the multiplication may still make sense, so that we have a sort of notion of adding a term, say, two and a half times.
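The first step of this generalisation is easy to check when n is a natural number; a Python sketch (the non-integer case is a matter of definition, so it is only printed, not tested):

```python
x, n = 7, 4

# Adding the term x to itself n times gives the product n * x.
repeated = sum([x] * n)      # x + x + x + x
print(repeated, n * x)       # 28 28
assert repeated == n * x

# For non-natural n the multiplication still makes sense,
# e.g. "two and a half" copies of x:
print(2.5 * x)               # 17.5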

A special case is multiplication by -1, which leads to the concept of the additive inverse, and to subtraction, the inverse operation to addition.

The most general version of these ideas is the linear combination, where any number of terms are included in the generalised sum any number of times.
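A linear combination can be sketched as a generalised sum in which each term is included according to its coefficient (the names coeffs and terms below are illustrative):

```python
# The linear combination 2*x1 + (-1)*x2 + 0.5*x3 as a generalised sum.
coeffs = [2, -1, 0.5]
terms = [3, 5, 4]

combination = sum(c * x for c, x in zip(coeffs, terms))
print(combination)  # 2*3 - 1*5 + 0.5*4 = 3.0
```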

Useful sums

The following are useful identities:

\sum_{i=1}^{n} i = \frac{n(n+1)}{2}, a formula famously attributed to Carl Friedrich Gauss in the 18th century;
\sum_{i=0}^{n} x^i = \frac{x^{n+1} - 1}{x - 1} for x \ne 1 (see geometric series);
\sum_{i=0}^{n} {n \choose i} = 2^n (see binomial coefficient).

In general, the sum of the first n mth powers is

\sum_{i=1}^{n} i^m = \frac{1}{m+1} \sum_{k=0}^{m} {m+1 \choose k} B_k n^{m+1-k}

where B_k is the kth Bernoulli number (taking the convention B_1 = +1/2).
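The Bernoulli-number formula can be checked against a direct sum. A Python sketch, assuming the B_1 = +1/2 convention (the helper names are illustrative); exact fractions are used so the check is not clouded by rounding:

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    """Bernoulli numbers B_0..B_m with the B_1 = +1/2 convention, as exact fractions."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for j in range(1, m + 1):
        # Standard recurrence (B_1 = -1/2 convention): sum_{k=0}^{j} C(j+1, k) B_k = 0.
        B[j] = -sum(Fraction(comb(j + 1, k)) * B[k] for k in range(j)) / (j + 1)
    # Flip to the B_1 = +1/2 convention (only B_1 changes sign; odd B_k vanish for k > 1).
    return [(-1) ** k * b for k, b in enumerate(B)]

def power_sum(n, m):
    """Sum of i^m for i = 1..n, via the Bernoulli-number formula above."""
    B = bernoulli_plus(m)
    total = sum(Fraction(comb(m + 1, k)) * B[k] * n ** (m + 1 - k) for k in range(m + 1))
    return total / (m + 1)

# Check against direct summation.
assert power_sum(10, 1) == sum(i for i in range(1, 11))       # 55
assert power_sum(10, 3) == sum(i ** 3 for i in range(1, 11))  # 3025
```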

The following are useful approximations (using theta notation):

\sum_{i=1}^{n} i^c \in \Theta(n^{c+1}) for every real constant c greater than -1;
\sum_{i=1}^{n} c^i \in \Theta(c^n) for every real constant c greater than 1;
\sum_{i=1}^{n} \log(i)^c \in \Theta(n \cdot \log(n)^c) for every nonnegative real constant c;
\sum_{i=1}^{n} \log(i)^c \cdot i^d \in \Theta(n^{d+1} \cdot \log(n)^c) for all nonnegative real constants c and d;
\sum_{i=1}^{n} \log(i)^c \cdot i^d \cdot b^i \in \Theta(n^d \cdot \log(n)^c \cdot b^n) for all real constants b > 1 and nonnegative real constants c and d.
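The first of these growth rates can be probed numerically: for c > -1, the ratio of \sum_{i=1}^{n} i^c to n^{c+1} tends to the constant 1/(c+1), consistent with the Theta bound. A Python sketch:

```python
def ratio(n, c):
    """(Sum of i^c for i = 1..n) divided by n^(c+1)."""
    return sum(i ** c for i in range(1, n + 1)) / n ** (c + 1)

for n in (10, 100, 1000):
    print(n, ratio(n, 2))  # approaches 1/3 for c = 2

assert abs(ratio(1000, 2) - 1 / 3) < 1e-2
```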

Approximation by integrals

Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function f:

\int_{s-1}^{t} f(x)\,dx \le \sum_{i=s}^{t} f(i) \le \int_{s}^{t+1} f(x)\,dx
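These integral bounds — integral of f from s-1 to t below, from s to t+1 above — can be checked for a concrete increasing function. A sketch with f(x) = x^2, using the exact antiderivative rather than numerical integration:

```python
def integral_x2(a, b):
    """Integral of x^2 from a to b (antiderivative x^3 / 3)."""
    return (b ** 3 - a ** 3) / 3

s, t = 1, 10
the_sum = sum(i ** 2 for i in range(s, t + 1))  # 1^2 + 2^2 + ... + 10^2 = 385

lower = integral_x2(s - 1, t)   # integral from s-1 to t
upper = integral_x2(s, t + 1)   # integral from s to t+1
print(lower, the_sum, upper)    # 333.33... 385 443.33...
assert lower <= the_sum <= upper
```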

For more general approximations, see the Euler-Maclaurin formula.