
Perturbation theory (quantum mechanics)

In quantum mechanics, perturbation theory is a set of approximation schemes for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system and gradually turn on an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system (e.g. its energy levels and eigenstates) will be continuously generated from those of the simple system. We can therefore study the former based on our knowledge of the latter.

Table of contents
1 Applications of perturbation theory
2 Time-independent perturbation theory
3 Time-dependent perturbation theory

Applications of perturbation theory

Perturbation theory is an extremely important tool for describing real quantum systems, as it turns out to be very difficult to find exact solutions to the Schrödinger equation for Hamiltonians of even moderate complexity; most of the Hamiltonians to which we know exact solutions, such as the hydrogen atom, the quantum harmonic oscillator and the particle in a box, are too idealized to adequately describe most systems. Using perturbation theory, we can use the known solutions of these simple Hamiltonians to generate solutions for a wide range of more complicated systems. For example, by adding a perturbative electric potential to the quantum mechanical model of the hydrogen atom, we can calculate the tiny shifts in the spectral lines of hydrogen caused by the presence of an electric field (the Stark effect). (Strictly speaking, though, if the external electric field is uniform and extends to infinity, there is no bound state at all: the electron would eventually tunnel out of the atom, no matter how weak the field is. The perturbative Stark levels are thus only approximate, describing long-lived resonances rather than true bound states.)

The solutions produced by perturbation theory are not exact, but they are often extremely accurate. Typically, the results are expressed as infinite power series in the perturbation strength. Summing more terms improves the accuracy at first, but only up to a point: these series are typically asymptotic rather than convergent, so beyond a certain order the partial sums deteriorate. In the theory of quantum electrodynamics (QED), in which the electron-photon interaction is treated perturbatively, the calculation of the electron's magnetic moment has been found to agree with experiment to eleven decimal places. In QED and other quantum field theories, special calculation techniques known as Feynman diagrams are used to systematically sum the power series terms.
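The asymptotic behaviour described above can be seen in a toy example unrelated to QED. The sketch below (all numbers illustrative) uses the Euler series Σ (−1)^n n! x^n, which is asymptotic to the Stieltjes integral F(x) = ∫ e^(−t)/(1 + xt) dt over t from 0 to ∞; its partial sums first approach F(x) and then diverge:

```python
# Toy illustration (not QED): the Euler series sum_n (-1)^n n! x^n is
# asymptotic to the Stieltjes integral F(x) = integral_0^inf e^(-t)/(1+x*t) dt.
# Its partial sums approach F(x) up to an optimal order near n ~ 1/x,
# then deteriorate.
import math

x = 0.1

def f_reference(x, t_max=60.0, steps=60000):
    """Reference value of F(x) by Simpson's rule on a truncated range."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * math.exp(-t) / (1 + x * t)
    return total * h / 3

def partial_sum(x, n_terms):
    """Sum of the first n_terms terms (-1)^n n! x^n of the Euler series."""
    s, term = 0.0, 1.0
    for n in range(n_terms):
        s += term
        term *= -(n + 1) * x   # next term: (-1)^(n+1) (n+1)! x^(n+1)
    return s

ref = f_reference(x)
errors = [abs(partial_sum(x, n) - ref) for n in (4, 11, 26)]
print(errors)  # the middle truncation (near n ~ 1/x) is the most accurate
```

The optimal truncation order grows as the perturbation weakens, which is why asymptotic series are still enormously useful in practice.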

Under some circumstances, perturbation theory is an invalid approach to take. This happens when the system we wish to describe cannot be described by a small perturbation imposed on some simple system. In quantum chromodynamics, for instance, the interaction of quarks with the gluon field cannot be treated perturbatively at low energies because the interaction energy becomes too large. Perturbation theory also fails to describe states that are not generated continuously, including bound states and various collective phenomena such as solitons. Imagine, for example, that we have a system of free (i.e. non-interacting) particles, to which an attractive interaction is introduced. Depending on the form of the interaction, this may create an entirely new set of eigenstates corresponding to groups of particles tightly bound to one another. An example of this phenomenon may be found in conventional superconductivity, in which the phonon-mediated attraction between conduction electrons leads to the formation of correlated electron pairs known as Cooper pairs. When faced with such systems, one usually turns to other approximation schemes, such as the variational method and the WKB approximation.

The problem of non-perturbative systems has been somewhat alleviated by the advent of modern computers. It has become practical to obtain numerical non-perturbative solutions for certain problems, using methods such as density functional theory. These advances have been of particular benefit to the field of quantum chemistry. Computers have also been used to carry out perturbation theory calculations to extraordinarily high levels of precision, which has proven important in particle physics for generating theoretical results that can be compared with experiment.

Time-independent perturbation theory

There are two categories of perturbation theory: time-independent and time-dependent. In this section, we discuss time-independent perturbation theory, in which the perturbation Hamiltonian is static (i.e., possesses no time dependence). Time-independent perturbation theory was invented by Erwin Schrödinger in 1926, shortly after he invented wave mechanics.

We begin with an unperturbed Hamiltonian H0, which is also assumed to have no time dependence. It has known energy levels and eigenstates, arising from the time-independent Schrödinger equation:

H0 |n(0)> = En(0) |n(0)>,    n = 1, 2, 3, ...

For simplicity, we have assumed that the energies are discrete. The (0) superscripts denote that these quantities are associated with the unperturbed system.

We now introduce a perturbation to the Hamiltonian. Let V be a Hamiltonian representing a weak physical disturbance, such as a potential energy produced by an external field. (Thus, V is formally a Hermitian operator.) Let λ be a dimensionless parameter that can take on values ranging continuously from 0 (no perturbation) to 1 (the full perturbation). The perturbed Hamiltonian is

H = H0 + λV

The energy levels and eigenstates of the perturbed Hamiltonian are again given by the Schrödinger equation:

(H0 + λV) |n> = En |n>

Our goal is to express En and |n> in terms of the energy levels and eigenstates of the old Hamiltonian. If the perturbation is sufficiently weak, we can write them as power series in λ:

En = En(0) + λ En(1) + λ^2 En(2) + ...
|n> = |n(0)> + λ |n(1)> + λ^2 |n(2)> + ...

When λ = 0, these reduce to the unperturbed values, which are the first term in each series. Since the perturbation is weak, the energy levels and eigenstates should not deviate too much from their unperturbed values, and the terms should rapidly become smaller as we go to higher order.

Plugging the power series into the Schrödinger equation, we obtain

(H0 + λV) (|n(0)> + λ|n(1)> + λ^2|n(2)> + ...) = (En(0) + λEn(1) + λ^2 En(2) + ...) (|n(0)> + λ|n(1)> + λ^2|n(2)> + ...)

Expanding this equation and comparing coefficients of each power of λ results in an infinite series of simultaneous equations. The zeroth-order equation is simply the Schrödinger equation for the unperturbed system. The first-order equation is

H0 |n(1)> + V |n(0)> = En(0) |n(1)> + En(1) |n(0)>

Operating on this equation with <n(0)|, and noting that the H0 term on the left cancels against the En(0) term on the right, this leads to the first-order energy shift:

En(1) = <n(0)|V|n(0)>

This is simply the expectation value of the perturbation Hamiltonian while the system is in the unperturbed state. This result can be interpreted in the following way: suppose the perturbation is applied, but we keep the system in the quantum state |n(0)>, which is a valid quantum state though no longer an energy eigenstate. The perturbation causes the average energy of the system to shift by <n(0)|V|n(0)>. The true energy shift is slightly different, because we must consider the perturbed eigenstate |n>; these further shifts are given by the second and higher order corrections.

To obtain the first-order deviation in the energy eigenstate, we insert our expression for the first-order energy shift back into the first-order equation above. We then make use of the resolution of the identity,

1 = Σk |k(0)> <k(0)|

The result is

(En(0) − H0) |n(1)> = Σ(k≠n) |k(0)> <k(0)|V|n(0)>

where the k = n term has cancelled against En(1) |n(0)>.

For the moment, suppose that this energy level is not degenerate, i.e. there is no other eigenstate with the same energy. The operator on the left hand side therefore has a well-defined inverse, and we get

|n(1)> = Σ(k≠n) [ <k(0)|V|n(0)> / (En(0) − Ek(0)) ] |k(0)>

The first-order change in the n-th energy eigenket has a contribution from each of the energy eigenstates k ≠ n. Each term is proportional to the matrix element <k(0)|V|n(0)>, which is a measure of how much the perturbation mixes eigenstate n with eigenstate k; it is also inversely proportional to the energy difference between eigenstates k and n, which means that the perturbation deforms the eigenstate to a greater extent if there are more eigenstates at nearby energies. We see also that the expression is singular if any of these states have the same energy as state n, which is why we assumed that there is no degeneracy.

We can find the higher-order deviations by a similar procedure, though the calculations become quite tedious with our current formulation. For example, the second-order energy shift is

En(2) = Σ(k≠n) |<k(0)|V|n(0)>|^2 / (En(0) − Ek(0))
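As a concrete check, the first- and second-order formulas above can be compared against exact diagonalization of a small matrix Hamiltonian. The following sketch (Python, with illustrative numbers not taken from the article) uses a 2×2 system, where the exact eigenvalues follow from the quadratic formula:

```python
# Numerical check of time-independent perturbation theory (a sketch;
# the 2x2 matrices here are illustrative, not from the article).
import math

# Unperturbed Hamiltonian H0 = diag(E0, E1) and a Hermitian perturbation V.
E0, E1 = 0.0, 1.0          # unperturbed levels, non-degenerate
a, b, v = 0.3, -0.2, 0.4   # V = [[a, v], [v, b]] in the unperturbed basis
lam = 0.01                 # perturbation strength lambda

# Exact ground-state energy of H0 + lam*V (quadratic formula for a 2x2 matrix).
h00, h11, h01 = E0 + lam*a, E1 + lam*b, lam*v
exact = (h00 + h11)/2 - math.sqrt(((h11 - h00)/2)**2 + h01**2)

# Perturbative corrections for level n = 0:
#   first order:  <0|V|0> = a
#   second order: |<1|V|0>|^2 / (E0 - E1) = v**2 / (E0 - E1)
approx = E0 + lam*a + lam**2 * v**2 / (E0 - E1)

# The residual error should be third order in lambda.
print(abs(exact - approx))   # much smaller than lam**2
```

Truncating at first order leaves an error of order λ^2; including the second-order term shrinks the error to order λ^3, as the series structure predicts.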

Effects of degeneracy

Suppose that two or more energy eigenstates are degenerate. Our above calculation for the first-order energy shift is unaffected, but the calculation of the change in the eigenstate runs into trouble, because the operator

En(0) − H0

does not have a well-defined inverse.

This is actually a conceptual, rather than mathematical, problem. Imagine that we have two or more perturbed eigenstates with different energies, which are continuously generated from an equal number of unperturbed eigenstates that are degenerate. Let D denote the subspace spanned by these degenerate eigenstates. The problem lies in the fact that there is no unique way to choose a basis of energy eigenstates for the unperturbed system. In particular, we could construct a different basis for D by choosing different linear combinations of the spanning eigenstates. In such a basis, the unperturbed eigenstates would not continuously generate the perturbed eigenstates.

We thus see that, in the presence of degeneracy, perturbation theory does not work with an arbitrary choice of energy basis. We must instead choose a basis so that the perturbation Hamiltonian is diagonal in the degenerate subspace D. In other words,

<k(0)|V|l(0)> = 0    for all distinct pairs of degenerate states |k(0)>, |l(0)> in D

In that case, our equation for the first-order deviation in the energy eigenstate reduces to

(En(0) − H0) |n(1)> = Σ(k ∉ D) |k(0)> <k(0)|V|n(0)>

The operator on the left hand side is not singular when applied to eigenstates outside D, so we can write

|n(1)> = Σ(k ∉ D) [ <k(0)|V|n(0)> / (En(0) − Ek(0)) ] |k(0)>
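A minimal numerical sketch of the degenerate case (illustrative numbers, not from the article): for a doubly degenerate level, the correct zeroth-order states diagonalize V within D, and the first-order shifts are the eigenvalues of V restricted to D. Taking H0 proportional to the identity makes the comparison exact to all orders:

```python
# Minimal sketch of degenerate perturbation theory (illustrative numbers).
import math

e0 = 1.0                   # doubly degenerate unperturbed level: H0 = e0 * I
a, b, v = 0.3, -0.1, 0.25  # V = [[a, v], [v, b]] on the degenerate subspace D
lam = 0.05

# The naive formula <k|V|n> / (En(0) - Ek(0)) would divide by zero here.
# Instead, diagonalize V within D: its eigenvalues are the first-order shifts.
mean, rad = (a + b) / 2, math.sqrt(((a - b) / 2)**2 + v**2)
shift_minus, shift_plus = mean - rad, mean + rad

# Since H0 = e0 * I commutes with everything, the perturbed eigenvalues
# e0 + lam * shift are exact, so the degeneracy is split linearly in lam.
exact_minus = e0 + lam * shift_minus
exact_plus = e0 + lam * shift_plus
print(exact_minus, exact_plus)
```

The splitting of the two levels at first order in λ is the mechanism behind effects such as the linear Stark splitting of degenerate hydrogen levels.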

Time-dependent perturbation theory

Time-dependent perturbation theory, developed by Paul Dirac, studies the effect of a time-dependent perturbation V(t) applied to a time-independent Hamiltonian H0. Since the perturbed Hamiltonian is time-dependent, so are its energy levels and eigenstates. Therefore, the goals of time-dependent perturbation theory are slightly different from time-independent perturbation theory. We are interested in the following quantities:

- The time-dependent expectation value of some observable A, for a given initial state.
- The time-dependent amplitudes of those quantum states that are energy eigenkets in the unperturbed system.

The first quantity is important because it gives rise to the classical result of a measurement of A performed on a macroscopic number of copies of the perturbed system. For example, we could take A to be the displacement in the x-direction of the electron in a hydrogen atom, in which case the expectation value, when multiplied by an appropriate coefficient, gives the time-dependent electrical polarization of a hydrogen gas. With an appropriate choice of perturbation (i.e. an oscillating electric potential), this allows us to calculate the AC permittivity of the gas.

The second quantity looks at the time-dependent probability of occupation for each eigenstate. This is particularly useful in laser physics, where one is interested in the populations of different atomic states in a gas when a time-dependent electric field is applied. These probabilities are also useful for calculating the "quantum broadening" of spectral lines (see line broadening).

We will briefly examine the ideas behind Dirac's formulation of time-dependent perturbation theory. Choose an energy basis {|n>} for the unperturbed system. (We will drop the (0) superscripts for the eigenstates, because it is not meaningful to speak of energy levels and eigenstates for the perturbed system.)

If the unperturbed system is in eigenstate |j> at time t = 0, its state at subsequent times varies only by a phase (we are following the Schrödinger picture, where state vectors evolve in time and operators are constant):

|ψ(t)> = exp(−iEj t/ħ) |j>

We now introduce a time-dependent perturbing Hamiltonian V(t). The Hamiltonian of the perturbed system is

H = H0 + V(t)

Let |ψ(t)> denote the quantum state of the perturbed system at time t. It obeys the time-dependent Schrödinger equation,

iħ (d/dt) |ψ(t)> = (H0 + V(t)) |ψ(t)>

The quantum state at each instant can be expressed as a linear combination of the basis {|n>}. We can write the linear combination as

|ψ(t)> = Σn cn(t) exp(−iEn t/ħ) |n>

where the cn(t)s are undetermined complex functions of t, which we will refer to as amplitudes (strictly speaking, they are the amplitudes in the Dirac picture). We have explicitly extracted the exponential phase factors exp(−iEnt/ħ) on the right hand side. This is only a matter of convention, and may be done without loss of generality. The reason we go to this trouble is that when the system starts in the state |j> and no perturbation is present, the amplitudes have the convenient property that, for all t, cj(t) = 1 and cn(t) = 0 if n ≠ j.

The absolute square of the amplitude cn(t) is the probability that the system is in state |n> at time t, since

|<n|ψ(t)>|^2 = |cn(t)|^2

Plugging into the Schrödinger equation and using the fact that ∂/∂t acts by the product rule, we obtain

iħ Σn (dcn/dt) exp(−iEn t/ħ) |n> = Σn cn(t) exp(−iEn t/ħ) V(t) |n>

where the H0 terms on the right have cancelled against the derivatives of the phase factors on the left.

By resolving the identity in front of V, this can be reduced to a set of coupled ordinary differential equations for the amplitudes:

dcn/dt = (−i/ħ) Σk <n|V(t)|k> ck(t) exp(−i(Ek − En) t/ħ)

The matrix elements of V play a similar role as in time-independent perturbation theory, being proportional to the rate at which amplitudes are shifted between states. Note, however, that the direction of the shift is modified by the exponential phase factor. Over times much longer than ħ/(Ek − En), the phase winds many times. If the time-dependence of V is sufficiently slow, this may cause the state amplitudes to oscillate. Such oscillations are useful for managing radiative transitions in a laser.

Up to this point, we have made no approximations, so this set of differential equations is exact. By supplying appropriate initial values cn(0), we could in principle find an exact (i.e. non-perturbative) solution. This is easily done when there are only two energy levels (n = 1, 2), and the solution is useful for modelling systems like the ammonia molecule. However, exact solutions are difficult to find when there are many energy levels, and one instead looks for perturbative solutions, which may be obtained by putting the equations in an integral form:

cn(t) = cn(0) − (i/ħ) Σk ∫[0,t] dt' <n|V(t')|k> ck(t') exp(−i(Ek − En) t'/ħ)
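For concreteness, the two-level case can be integrated numerically. The following sketch (Python, with ħ = 1 and illustrative parameters) integrates the exact amplitude equations with a Runge-Kutta step and compares the result with the standard Rabi formula for a constant perturbation:

```python
# Numerical sketch of the amplitude equations for a two-level system
# (hbar = 1; the specific numbers are illustrative, not from the article).
import cmath, math

delta = 1.0   # energy gap E2 - E1
v = 0.2       # constant off-diagonal matrix element <2|V|1>
dt = 1e-3
t_end = 5.0

def deriv(t, c1, c2):
    # dc_n/dt = -i * sum_k <n|V|k> c_k exp(-i (E_k - E_n) t)
    d1 = -1j * v * c2 * cmath.exp(-1j * delta * t)
    d2 = -1j * v * c1 * cmath.exp(+1j * delta * t)
    return d1, d2

c1, c2 = 1.0 + 0j, 0.0 + 0j   # start in state |1>
t = 0.0
for _ in range(int(t_end / dt)):
    # classical 4th-order Runge-Kutta step
    k1 = deriv(t, c1, c2)
    k2 = deriv(t + dt/2, c1 + dt/2*k1[0], c2 + dt/2*k1[1])
    k3 = deriv(t + dt/2, c1 + dt/2*k2[0], c2 + dt/2*k2[1])
    k4 = deriv(t + dt, c1 + dt*k3[0], c2 + dt*k3[1])
    c1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    c2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt

# Exact Rabi formula for a constant perturbation:
omega = math.sqrt(v**2 + (delta/2)**2)
p2_exact = (v**2 / omega**2) * math.sin(omega * t_end)**2
p2_numeric = abs(c2)**2

print(p2_numeric, p2_exact)       # should agree closely
print(abs(c1)**2 + abs(c2)**2)    # total probability stays 1
```

Because the amplitude equations are exact, the only error here is the discretization of the integrator; probability conservation provides a useful sanity check.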

By repeatedly substituting this expression for cn back into the right hand side, we get an iterative solution

where, for example, the first-order term is

cn(1)(t) = −(i/ħ) ∫[0,t] dt' <n|V(t')|j> exp(−i(Ej − En) t'/ħ)

for a system that starts in the eigenstate |j>, so that ck(0) = δkj.
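For a constant perturbation switched on at t = 0, this first-order integral can be evaluated in closed form, giving (with ħ = 1) the transition probability |cn(1)(t)|^2 = (4v^2/Δ^2) sin^2(Δt/2), where v is the matrix element and Δ the energy gap. A quick numerical comparison with the exact two-level (Rabi) result, for a weak perturbation with illustrative numbers:

```python
# First-order vs. exact transition probability for a constant perturbation
# (hbar = 1; v and delta are illustrative, with v << delta).
import math

v, delta, t = 0.01, 1.0, 3.0   # matrix element, energy gap, elapsed time

# First-order perturbation theory: |c^(1)(t)|^2 = (4 v^2 / delta^2) sin^2(delta t / 2)
p_first = (4 * v**2 / delta**2) * math.sin(delta * t / 2)**2

# Exact two-level (Rabi) result for comparison
omega = math.sqrt(v**2 + (delta / 2)**2)
p_exact = (v**2 / omega**2) * math.sin(omega * t)**2

print(p_first, p_exact)  # nearly identical for weak v
```

The first-order result is accurate as long as v is small compared with the gap and the accumulated phase error (of order v^2 t/Δ) remains small.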

Many further results may be obtained, such as Fermi's golden rule, which relates the rate of transitions between quantum states to the density of states at particular energies, and the Dyson series, obtained by applying the iterative method to the time evolution operator, which is one of the starting points for the method of Feynman diagrams.