Integration by parts
In mathematics, integration by parts is a general rule that transforms the integral of a product of functions into other integrals, with the aim that these are simpler. The rule arises from the product rule of differentiation.
Suppose f(x) and g(x) are two continuously differentiable functions. Then the rule states
 ∫ f(x) g′(x) dx = f(x) g(x) − ∫ f′(x) g(x) dx,
or in a shorter form: if we let u = f(x), v = g(x) and the differentials du = f′(x) dx and dv = g′(x) dx, then it takes the form in which it is most often seen:
 ∫ u dv = uv − ∫ v du.
A discrete analogue for sequences, called summation by parts, exists.
Note that the original integral contains the derivative of g; in order to be able to apply the rule, you need to find its antiderivative g, and then you still have to evaluate the resulting integral ∫ g f′ dx.
An alternative notation has the advantage that the factors of the original expression are identified as f and g, but the drawback of a nested integral:
 ∫ f(x) g(x) dx = f(x) ∫ g(x) dx − ∫ ( f′(x) ∫ g(x) dx ) dx.
This formula is valid whenever f is continuously differentiable and g is continuous.
If we combine the first formula above with the fundamental theorem of calculus, definite integrals can also be integrated by parts. If we evaluate both sides of the formula between a and b and assume f(x) and g(x) are continuously differentiable, then by applying the fundamental theorem of calculus we obtain this useful formula:
 ∫_{a}^{b} f(x) g′(x) dx = f(b) g(b) − f(a) g(a) − ∫_{a}^{b} f′(x) g(x) dx.
The rule is helpful whenever you need to integrate a function h(x) and you are able to break it up into a product of two functions, h(x) = f(x)g(x), in such a way that you know how to differentiate f, how to integrate g, and how to deal with the resulting integral of f ' times the integral of g.
In order to calculate ∫ x cos(x) dx:
Let:
 u = x, so that du = dx,
 dv = cos(x) dx, so that v = sin(x).
Then:
 ∫ x cos(x) dx = x sin(x) − ∫ sin(x) dx = x sin(x) + cos(x) + C,
where C is an arbitrary constant of integration.
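The result can be sanity-checked numerically. The following sketch (plain Python, no external libraries; the sample point x0 = 0.7 is an arbitrary choice) confirms that the derivative of x sin(x) + cos(x) matches the integrand x cos(x):

```python
import math

# Antiderivative found by parts: ∫ x cos(x) dx = x sin(x) + cos(x) + C
def F(x):
    return x * math.sin(x) + math.cos(x)

# Differentiate F numerically (central difference) and compare with x cos(x)
h, x0 = 1e-6, 0.7
deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(abs(deriv - x0 * math.cos(x0)) < 1e-6)  # True
```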
By repeatedly using integration by parts, integrals such as ∫ x^{n} e^{x} dx can be computed in the same fashion: each application of the rule lowers the power of x by one.
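The power-lowering can be made explicit. As an illustration (taking e^{x} as the other factor is an assumption here, not the only possibility), the reduction ∫ x^{n} e^{x} dx = x^{n} e^{x} − n ∫ x^{n−1} e^{x} dx can be applied recursively:

```python
import math

def antideriv_coeffs(n):
    """Coefficients q (low-to-high degree) with ∫ x^n e^x dx = q(x) e^x + C,
    built from the reduction ∫ x^n e^x dx = x^n e^x − n ∫ x^(n-1) e^x dx."""
    if n == 0:
        return [1.0]                # base case: ∫ e^x dx = e^x
    prev = antideriv_coeffs(n - 1)
    q = [0.0] * (n + 1)
    q[n] = 1.0                      # the x^n e^x term
    for i, c in enumerate(prev):    # minus n times the previous polynomial
        q[i] -= n * c
    return q

def F(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs)) * math.exp(x)

q3 = antideriv_coeffs(3)            # gives x^3 - 3x^2 + 6x - 6
# Check dF/dx ≈ x^3 e^x numerically at a sample point
h, x0 = 1e-6, 1.2
deriv = (F(q3, x0 + h) - F(q3, x0 - h)) / (2 * h)
print(abs(deriv - x0**3 * math.exp(x0)) < 1e-5)  # True
```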
An interesting example that is commonly seen is ∫ e^{x} cos(x) dx, where, strangely enough, in the end, you don't have to do the actual integration.
This example uses integration by parts twice. First let:
 u = e^{x}; thus du = e^{x}dx
 v = sin(x); thus dv = cos(x)dx
Then:
 ∫ e^{x} cos(x) dx = e^{x} sin(x) − ∫ e^{x} sin(x) dx.
Now, to evaluate the remaining integral, we use integration by parts again, with:
 u = e^{x}; du = e^{x}dx
 v = −cos(x); dv = sin(x) dx
Then:
 ∫ e^{x} sin(x) dx = −e^{x} cos(x) + ∫ e^{x} cos(x) dx.
Putting these together, we get
 ∫ e^{x} cos(x) dx = e^{x} sin(x) + e^{x} cos(x) − ∫ e^{x} cos(x) dx.
Notice that the same integral shows up on both sides of this equation. So you can simply add the integral to both sides to get:
 2 ∫ e^{x} cos(x) dx = e^{x} (sin(x) + cos(x)) + C,
that is,
 ∫ e^{x} cos(x) dx = e^{x} (sin(x) + cos(x)) / 2 + C.
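As before, the closed form obtained without doing any direct integration can be checked by differentiating it numerically at a few sample points (a plain-Python sketch):

```python
import math

# Result of the double application of parts:
# ∫ e^x cos(x) dx = e^x (sin(x) + cos(x)) / 2 + C
def F(x):
    return math.exp(x) * (math.sin(x) + math.cos(x)) / 2

# Central-difference derivative of F should match e^x cos(x)
h = 1e-6
ok = all(
    abs((F(x0 + h) - F(x0 - h)) / (2 * h) - math.exp(x0) * math.cos(x0)) < 1e-6
    for x0 in (0.0, 0.3, 1.1)
)
print(ok)  # True
```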
The other two famous examples arise when you take a function which isn't a product and treat it as the product of 1 and itself, then use integration by parts. This works if you know how to differentiate the function you want to integrate, and you also know how to integrate this derivative times x.
The first example is ∫ ln(x) dx. Write this as ∫ ln(x) · 1 dx.
Let:
 u = ln(x); du = 1/x dx
 v = x; dv = 1·dx
Then:
 ∫ ln(x) dx = x ln(x) − ∫ x · (1/x) dx = x ln(x) − ∫ 1 dx = x ln(x) − x + C,
where, again, C is the arbitrary constant of integration.
The second example is ∫ arctan(x) dx, where arctan(x) is the inverse tangent function. Rewrite this as ∫ arctan(x) · 1 dx.
Now let:
 u = arctan(x); du = 1/(1+x^{2}) dx
 v = x; dv = 1·dx
Then:
 ∫ arctan(x) dx = x arctan(x) − ∫ x/(1+x^{2}) dx = x arctan(x) − (1/2) ln(1+x^{2}) + C,
using a combination of the inverse chain rule method (substituting w = 1 + x^{2}) and the natural logarithm integral ∫ (1/w) dw = ln|w| + C.
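Both of these "product with 1" results can be verified in the same way, by differentiating the antiderivatives numerically (a plain-Python sketch; the sample point is arbitrary):

```python
import math

# ∫ ln(x) dx = x ln(x) - x + C
def F_ln(x):
    return x * math.log(x) - x

# ∫ arctan(x) dx = x arctan(x) - (1/2) ln(1 + x^2) + C
def F_atan(x):
    return x * math.atan(x) - math.log(1 + x * x) / 2

# Central differences should recover ln(x) and arctan(x)
h, x0 = 1e-6, 1.5
d_ln = (F_ln(x0 + h) - F_ln(x0 - h)) / (2 * h)
d_atan = (F_atan(x0 + h) - F_atan(x0 - h)) / (2 * h)
print(abs(d_ln - math.log(x0)) < 1e-6)     # True
print(abs(d_atan - math.atan(x0)) < 1e-6)  # True
```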
Integration by parts follows from the product rule of differentiation: if the two continuously differentiable functions u(x) and v(x) are given, the product rule states that
 (u(x) v(x))′ = u′(x) v(x) + u(x) v′(x).
By integrating both sides, we get
 u(x) v(x) = ∫ (u′(x) v(x) + u(x) v′(x)) dx.
The latter integral can be written as the sum of two integrals since integration is linear:
 u(x) v(x) = ∫ u′(x) v(x) dx + ∫ u(x) v′(x) dx
(the fact that u and v are continuously differentiable ensures that the two individual integrals exist). Subtracting ∫ u v′ dx from both sides yields the desired formula of integration by parts:
 ∫ u′(x) v(x) dx = u(x) v(x) − ∫ u(x) v′(x) dx.
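The derivation can also be checked on a definite integral. The sketch below (plain Python; the choices u = x², v = sin(x) and the interval [0, 2] are arbitrary) compares both sides of ∫_{a}^{b} u′v dx = u(b)v(b) − u(a)v(a) − ∫_{a}^{b} u v′ dx using Simpson's rule:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Arbitrary test functions: u = x^2, v = sin(x)
u  = lambda x: x * x
du = lambda x: 2 * x
v  = math.sin
dv = math.cos

a, b = 0.0, 2.0
lhs = simpson(lambda x: du(x) * v(x), a, b)
rhs = u(b) * v(b) - u(a) * v(a) - simpson(lambda x: u(x) * dv(x), a, b)
print(abs(lhs - rhs) < 1e-8)  # True
```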
Connection to distributions
When defining distributions, integration rather than differentiation is the fundamental operation. The derivatives of distributions are then defined so as to make integration by parts work.