Cooley-Tukey FFT algorithm

The Cooley-Tukey algorithm is the most common fast Fourier transform (FFT) algorithm. It re-expresses the discrete Fourier transform (DFT) of an arbitrary composite size n = n1n2 in terms of smaller DFTs of sizes n1 and n2, recursively, in order to reduce the computation time to O(n log n) for highly composite n. Because of the algorithm's importance, specific variants and implementation styles have become known by their own names, as described below.

This algorithm, including its recursive application, was already known around 1805 to Carl Friedrich Gauss, who used it to interpolate the trajectories of the asteroids Pallas and Juno; his work was not widely recognized, however (it was published only posthumously and in Neo-Latin), and Gauss did not analyze the asymptotic computation time. Various limited forms were also rediscovered several times throughout the 19th and early 20th centuries. FFTs became popular after J. W. Cooley of IBM and J. W. Tukey of Princeton published a paper in 1965 reinventing the algorithm and describing how to perform it conveniently on a computer (including how to arrange for the output to be produced in the natural ordering).

Because the Cooley-Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT. For example, Rader's or Bluestein's algorithm can be used to handle large prime factors that cannot be decomposed by Cooley-Tukey, or the prime-factor algorithm can be exploited for greater efficiency in separating out relatively prime factors. See also the fast Fourier transform for information on other FFT algorithms, specializations for real and/or symmetric data, and accuracy in the face of finite floating-point precision.

Table of contents
1 The radix-2 DIT case
2 General factorizations
3 Data reordering, bit reversal, and in-place algorithms
4 References

The radix-2 DIT case

The simplest and most common form of the Cooley-Tukey algorithm (more so in textbooks than in high-performance implementations, however) is called a radix-2 decimation-in-time (DIT) FFT: at each recursive stage it divides a transform of size n into two interleaved transforms of size n/2.

Recall that the DFT is defined by the formula:

    f_j = \sum_{k=0}^{n-1} x_k \, e^{-2\pi i j k / n},        j = 0, ..., n-1.

Radix-2 DIT first computes the Fourier transforms of the even-indexed numbers x0, x2, ..., xn-2, and of the odd-indexed numbers x1, x3, ..., xn-1, and then combines those two results to produce the Fourier transform of the whole sequence. This idea can then be applied recursively to reduce the overall runtime to O(n log n). This simplified form assumes that n is a power of two; since the number of sample points n can usually be chosen freely by the application, this is often not an important restriction.
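
For reference, a direct evaluation of the definition above takes O(n^2) operations. A minimal Python sketch (the function name dft_direct is merely illustrative, not part of any standard library) might look like this:

    import cmath

    def dft_direct(x):
        # Direct O(n^2) evaluation of the DFT definition; useful as a
        # correctness reference for the faster routines sketched below.
        n = len(x)
        return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
                for j in range(n)]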

More explicitly, let us write n' = n/2, and denote the DFT of the even-indexed numbers x'0 = x0, x'1 = x2, ..., x'n'-1 = xn-2 by f'0, ..., f'n'-1, and the DFT of the odd-indexed numbers x''0 = x1, x''1 = x3, ..., x''n'-1 = xn-1 by f''0, ..., f''n'-1. Then it follows that, for j = 0, ..., n'-1:

    f_j      = f'_j + e^{-2\pi i j / n} f''_j
    f_{j+n'} = f'_j - e^{-2\pi i j / n} f''_j

This process is an example of the general technique of divide and conquer algorithms; in many traditional implementations, however, the explicit recursion is avoided, and instead one traverses the computational tree in breadth-first fashion.
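
As a concrete illustration, here is a minimal recursive radix-2 DIT sketch in Python (the function name is illustrative, and n is assumed to be a power of two):

    import cmath

    def fft_radix2(x):
        # Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two.
        n = len(x)
        if n == 1:
            return list(x)
        even = fft_radix2(x[0::2])   # f': DFT of the even-indexed samples
        odd = fft_radix2(x[1::2])    # f'': DFT of the odd-indexed samples
        out = [0] * n
        for j in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * j / n) * odd[j]   # twiddle factor times f''_j
            out[j] = even[j] + t            # f_j      = f'_j + e^{-2 pi i j/n} f''_j
            out[j + n // 2] = even[j] - t   # f_{j+n'} = f'_j - e^{-2 pi i j/n} f''_j
        return out

This depth-first recursion is the explicit form that the breadth-first implementations mentioned above avoid; they compute all size-2 DFTs first, then all size-4 DFTs, and so on.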

The above re-expression of a size-n DFT as two size-n/2 DFTs is sometimes called the Danielson-Lanczos lemma, since the identity was noted by those two authors in 1942 (influenced by Runge's 1903 work). They applied their lemma in a "backwards" recursive fashion, repeatedly doubling the DFT size until the transform spectrum converged (although they apparently didn't realize the linearithmic asymptotic complexity they had achieved). The Danielson-Lanczos work predated widespread availability of computing machines and required hand calculation; they reported a computation time of 140 minutes for a size-64 DFT operating on real inputs (see below) to 3-5 significant digits. Cooley and Tukey's 1965 paper reported a running time of 0.02 minutes for a size-2048 complex DFT on an IBM 7094 (probably in 36-bit single precision, ~8 digits). Rescaling the time by the number of operations, this corresponds roughly to a speedup factor of around 800,000. The more modern FFT library FFTW, on a 2 GHz Pentium 4 in 64-bit double precision (~16 digits), can compute a size-64 real-input DFT in 0.5 μs and a size-2048 complex DFT in 50 μs, speedups of about 16,000,000,000 and 20,000 over Danielson & Lanczos and Cooley & Tukey, respectively, not even including the considerable improvements in accuracy.

(140 minutes for size 64 may sound like a long time, but it corresponds to an average of at most 16 seconds per floating-point operation, around 20% of which are multiplications; this is a fairly impressive rate for a human being to sustain for almost two and a half hours, especially when you consider the bookkeeping overhead.)

General factorizations

More generally, Cooley-Tukey algorithms recursively re-express a DFT of a composite size n = n1n2 as:

  1. Perform n1 DFTs of size n2.
  2. Multiply by complex roots of unity called twiddle factors.
  3. Perform n2 DFTs of size n1.

Typically, either n1 or n2 is a small factor (not necessarily prime), called the radix (which can differ between stages of the recursion). If n1 is the radix, it is called a decimation in time (DIT) algorithm, whereas if n2 is the radix, it is decimation in frequency (DIF, also called the Sande-Tukey algorithm). The version presented above was a radix-2 DIT algorithm; in the final expression, the phase multiplying the odd transform is the twiddle factor, and the +/- combination (butterfly) of the even and odd transforms is a size-2 DFT. (The radix's small DFT is sometimes known as a butterfly, so-called because of the shape of the data-flow diagram for the radix-2 case.)

There are many other variations on the Cooley-Tukey algorithm. Mixed-radix implementations handle composite sizes with a variety of (typically small) factors in addition to two, usually (but not always) employing the O(n^2) algorithm for the prime base cases of the recursion. Split-radix merges radices 2 and 4, exploiting the fact that the first transform of radix 2 requires no twiddle factor, in order to achieve a minimal operation count for power-of-two sizes. (On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts; well-optimized FFT implementations often employ larger radices and/or hard-coded base-case transforms of significant size.) Another way of looking at the Cooley-Tukey algorithm is that it re-expresses a size-n one-dimensional DFT as an n1-by-n2 two-dimensional DFT (plus twiddles), where the output matrix is transposed. The net result of all of these transpositions, for a radix-2 algorithm, corresponds to a bit reversal of the input (DIF) or output (DIT) indices. If, instead of using a small radix, one employs a radix of roughly √n and explicit input/output matrix transpositions, the result is called a four-step algorithm (or six-step, depending on the number of transpositions), initially proposed to improve memory locality, e.g. for cache optimization (Gentleman and Sande, 1966; also Bailey, 1990).

The general Cooley-Tukey factorization rewrites the indices j and k as j = n2 j1 + j2 and k = n1 k2 + k1, respectively, where the indices ja and ka run from 0 to na - 1 (for a = 1 or 2). That is, it re-indexes the input (k) and output (j) as n1-by-n2 two-dimensional arrays in column-major and row-major order, respectively. When this re-indexing is substituted into the exponent jk of the DFT formula, the n1n2 j1 k2 cross term vanishes (its exponential is unity), and the remaining terms give

    f_{n_2 j_1 + j_2} = \sum_{k_1=0}^{n_1-1} \left( \sum_{k_2=0}^{n_2-1} x_{n_1 k_2 + k_1} \, e^{-2\pi i j_2 k_2 / n_2} \right) \left[ e^{-2\pi i j_2 k_1 / n} \right] e^{-2\pi i j_1 k_1 / n_1}

where the inner sum is a DFT of size n2, the outer sum is a DFT of size n1, and the bracketed [...] term is the twiddle factor.
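
The following Python sketch carries out the three steps listed above exactly as in this formula. It is illustrative only: the helper dft is a direct transform standing in for the sub-transforms of sizes n1 and n2, which a real implementation would compute recursively, and the function names are not from any particular library.

    import cmath

    def dft(x):
        # Direct DFT, standing in for the size-n1 and size-n2 sub-transforms.
        n = len(x)
        return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
                for j in range(n)]

    def fft_cooley_tukey_step(x, n1, n2):
        # One Cooley-Tukey factorization step for a DFT of size n = n1*n2.
        n = n1 * n2
        assert len(x) == n
        # Step 1: n1 inner DFTs of size n2, over the subsequences x[n1*k2 + k1].
        inner = [dft([x[n1 * k2 + k1] for k2 in range(n2)]) for k1 in range(n1)]
        # Step 2: multiply by the twiddle factors e^{-2 pi i j2 k1 / n}.
        for k1 in range(n1):
            for j2 in range(n2):
                inner[k1][j2] *= cmath.exp(-2j * cmath.pi * j2 * k1 / n)
        # Step 3: n2 outer DFTs of size n1, writing the output at index j = n2*j1 + j2.
        out = [0] * n
        for j2 in range(n2):
            col = dft([inner[k1][j2] for k1 in range(n1)])
            for j1 in range(n1):
                out[n2 * j1 + j2] = col[j1]
        return out

Setting n1 = 2 in this sketch reproduces the radix-2 DIT decomposition given earlier: the two inner DFTs are the even- and odd-indexed half-size transforms, and the outer size-2 DFTs are the butterflies.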

The 1965 Cooley-Tukey paper noted that one can employ an arbitrary radix r (as well as mixed radices), but failed to realize that the radix butterfly is itself a DFT that can use FFT algorithms. Hence, they reckoned the complexity to be O(r^2 (n/r) log_r n), and erroneously concluded that the optimal radix was 3 (the closest integer to e). Gauss also derived the algorithm for arbitrary radices, and gave explicit examples of both radix-3 and radix-6 steps.
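
To see where the "radix 3" conclusion comes from, one can reconstruct the reasoning under that (mistaken) cost model: with log_r n stages, each consisting of n/r butterflies of r^2 operations, the total work is

    C(r) = r^2 (n/r) \log_r n = n \ln n \cdot \frac{r}{\ln r},
    \qquad \frac{d}{dr}\left(\frac{r}{\ln r}\right) = \frac{\ln r - 1}{(\ln r)^2} = 0
    \;\Rightarrow\; r = e \approx 2.718,

so 3 is indeed the nearest integer; the error lay in the r^2 cost assigned to the butterfly, not in the minimization itself.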

Data reordering, bit reversal, and in-place algorithms

Although the abstract Cooley-Tukey factorization of the DFT, above, applies to all implementations of the algorithm, much greater diversity exists in the techniques for ordering and accessing the data at each stage of the FFT. Of special interest is the problem of devising an in-place algorithm that overwrites its input with its output data using only O(1) auxiliary storage.

The best-known reordering technique involves explicit bit reversal for in-place radix-2 algorithms. Bit reversal is the permutation where the data at an index k, written in binary with digits b4 b3 b2 b1 b0 (e.g. 5 digits for n = 32 inputs), is transferred to the index with reversed digits b0 b1 b2 b3 b4. Consider the last stage of a radix-2 DIT algorithm like the one presented above, where the output is written in-place over the input: when f'_j and f''_j are combined with a size-2 DFT, those two values are overwritten by the outputs. However, the two output values should go in the first and second halves of the output array, corresponding to the most significant bit b4 (for n = 32), whereas the two inputs f'_j and f''_j are interleaved in the even and odd elements, corresponding to the least significant bit b0. Thus, in order to get the output in the correct place, these two bits must be swapped in the input. Considering all of the recursive stages of a radix-2 DIT algorithm, all the bits must be swapped, and thus one must pre-process the input with a bit reversal to get in-order output. Correspondingly, the reversed (dual) algorithm is radix-2 DIF; it takes in-order input and produces bit-reversed output, requiring a bit-reversal post-processing step. Alternatively, some applications (such as convolution) work equally well on bit-reversed data, so one can do radix-2 DIF without bit reversal, followed by processing, followed by the radix-2 DIT inverse DFT without bit reversal, producing final results in the natural order.
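
One common way to implement this scheme (an illustrative Python sketch, with n assumed to be a power of two) is to bit-reverse the array in place and then apply the butterflies stage by stage:

    import cmath

    def bit_reverse_permute(a):
        # In-place bit-reversal permutation of a list whose length is a power of two.
        n = len(a)
        j = 0
        for i in range(1, n):
            bit = n >> 1
            while j & bit:        # increment j in bit-reversed arithmetic
                j ^= bit
                bit >>= 1
            j |= bit
            if i < j:
                a[i], a[j] = a[j], a[i]

    def fft_dit_in_place(a):
        # Iterative radix-2 DIT FFT: bit-reversed input order, natural output order.
        n = len(a)
        bit_reverse_permute(a)
        length = 2                              # current transform size
        while length <= n:
            w_step = cmath.exp(-2j * cmath.pi / length)
            for start in range(0, n, length):
                w = 1.0
                for k in range(length // 2):
                    u = a[start + k]
                    v = a[start + k + length // 2] * w
                    a[start + k] = u + v              # size-2 DFT (butterfly)
                    a[start + k + length // 2] = u - v
                    w *= w_step
            length *= 2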

Many FFT users, however, prefer natural-order outputs, and a separate, explicit bit-reversal stage can have a non-negligible impact on the computation time, even though bit reversal can be done in O(n) time and has been the subject of much research (e.g. Karp, 1996; Carter, 1998; and Rubio, 2002). Also, while the permutation is a bit reversal in the radix-2 case, it is more generally an arbitrary (mixed-base) digit reversal for the mixed-radix case, and the permutation algorithms become more complicated to implement. Moreover, it is desirable on many hardware architectures to re-order intermediate stages of the FFT algorithm so that they operate on consecutive (or at least more localized) data elements. To these ends, a number of alternative implementation schemes have been devised for the Cooley-Tukey algorithm that do not require separate bit reversal and/or involve additional permutations at intermediate stages.

The problem is greatly simplified if the transform is performed out-of-place: the output array is distinct from the input array or, equivalently, an equal-size auxiliary array is available. The Stockham auto-sort algorithm (Stockham, 1966) performs every stage of the FFT out-of-place, typically writing back and forth between two arrays, transposing one "digit" of the indices with each stage, and has been especially popular on SIMD architectures (Swarztrauber, 1982). Even greater potential SIMD advantages (more consecutive accesses) have been proposed for the Pease (1968) algorithm, which also reorders out-of-place with each stage, but this method requires separate bit/digit reversal and O(n log n) storage. One can also directly apply the Cooley-Tukey factorization definition with explicit (depth-first) recursion and small radices, which produces natural-order out-of-place output with no separate permutation step and can be argued to have cache-oblivious locality benefits on systems with hierarchical memory (Singleton, 1967; see also FFTW).
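
A minimal sketch of the Stockham idea for radix 2 (decimation in frequency, ping-ponging between two arrays; the names and loop organization here are one of several possible arrangements, not the 1966 formulation verbatim) might look like:

    import cmath

    def fft_stockham(x):
        # Radix-2 Stockham-style autosort FFT: every stage is out-of-place, so no
        # separate bit-reversal pass is needed and the output is in natural order.
        n = len(x)
        a, b = list(x), [0] * n
        half, m = n // 2, 1          # half = twiddle groups per stage, m = butterflies per group
        while half >= 1:
            for j in range(half):
                w = cmath.exp(-2j * cmath.pi * j / (2 * half))   # twiddle for this group
                for k in range(m):
                    c0 = a[k + j * m]
                    c1 = a[k + j * m + half * m]
                    b[k + 2 * j * m] = c0 + c1
                    b[k + 2 * j * m + m] = (c0 - c1) * w
            a, b = b, a              # swap roles: the next stage reads what was just written
            half //= 2
            m *= 2
        return a

Because each stage writes its results already partially reordered, the final output emerges in natural order without any bit-reversal pass, at the cost of the second array.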

A typical strategy for in-place algorithms without auxiliary storage and without separate digit-reversal passes involves small matrix transpositions (which swap individual pairs of digits) at intermediate stages, which can be combined with the radix butterflies to reduce the number of passes over the data (Johnson & Burrus, 1984; Temperton, 1991; Qian et al., 1994; Hegland, 1994).


References