
Discrete Hartley transform

A discrete Hartley transform (DHT) is a frequency transform of discrete, periodic data similar to the discrete Fourier transform (DFT), with analogous applications in signal processing and related fields. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform, the DHT is the discrete analogue of the continuous Hartley transform, introduced by R. V. L. Hartley in 1942.

Because there are fast algorithms for the DHT analogous to the fast Fourier transform (FFT), the DHT was originally proposed by R. N. Bracewell in 1983 as a more efficient computational tool in the common case where the data are purely real. It was subsequently argued, however, that specialized FFT algorithms for real inputs/outputs can ordinarily be found with fewer operations than any corresponding algorithm for the DHT.

Table of contents
1 Definition
2 Properties
3 Fast Algorithms
4 References

Definition

Formally, the discrete Hartley transform is a linear, invertible function H : Rn -> Rn (where R denotes the set of real numbers). The n real numbers x0, ..., xn-1 are transformed into the n real numbers h0, ..., hn-1 according to the formula

    hj = Σk=0..n-1 xk [cos(2πjk/n) + sin(2πjk/n)],   j = 0, ..., n-1,

where π is pi. The combination cos(z) + sin(z) (= √2 cos(z − π/4)) is sometimes denoted cas(z), and should be contrasted with the cos(z) − i sin(z) that appears in the DFT definition (where i is the imaginary unit).

As with the DFT, the overall scale factor in front of the transform and the sign of the sine term are a matter of convention; they differ between treatments, but do not affect the essential properties.
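For concreteness, the definition can be evaluated directly. The following is a minimal sketch in Python using NumPy; the name dht_direct is chosen here purely for illustration:

    import numpy as np

    def dht_direct(x):
        # Direct evaluation of the definition: h[j] = sum_k x[k] * (cos + sin)(2*pi*j*k/n).
        # Requires O(n^2) operations; intended only to illustrate the formula, not for large n.
        x = np.asarray(x, dtype=float)
        n = len(x)
        k = np.arange(n)
        angles = 2 * np.pi * np.outer(k, k) / n
        return (np.cos(angles) + np.sin(angles)) @ x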

Properties

The transform can be interpreted as the multiplication of the vector (x0, ..., xn-1) by an n-by-n matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible, and the inverse transformation, which recovers the xk from the hj, is simply the DHT of the hj multiplied by 1/n. That is, the DHT is its own inverse, up to an overall scale factor.
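The self-inverse property is easy to check numerically (a sketch assuming the dht_direct routine above):

    x = np.random.rand(8)
    h = dht_direct(x)
    np.allclose(dht_direct(h) / 8, x)   # -> True: DHT applied twice, divided by n, recovers x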

The DHT can be used to compute the DFT, and vice versa. For real inputs xk, the DFT output fj has real part (hj + hn-j)/2 and imaginary part (hn-j - hj)/2. Conversely, the DHT output hj is the real part of the DFT of the sequence (1 + i)xk.
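This relation can be verified directly. The sketch below (again assuming dht_direct from the Definition section; dft_from_dht is a hypothetical helper name) recovers the real-input DFT from the DHT and compares it against NumPy's FFT:

    def dft_from_dht(h):
        # f[j] = (h[j] + h[n-j])/2 + i*(h[n-j] - h[j])/2, with indices taken mod n
        h = np.asarray(h, dtype=float)
        hr = np.roll(h[::-1], 1)                 # hr[j] = h[(n - j) % n]
        return (h + hr) / 2 + 1j * (hr - h) / 2

    x = np.random.rand(8)
    np.allclose(dft_from_dht(dht_direct(x)), np.fft.fft(x))   # -> True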

As with the DFT, a cyclic convolution z = x*y of two vectors x = (xk) and y = (yk) to produce a vector z = (zk), all of length n, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. Then the elements of Z are given by:

    Zk = [ Xk (Yk + Yn-k) + Xn-k (Yk − Yn-k) ] / 2,

where we take all of the vectors to be periodic in n (Xn = X0, and so on). Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers (pairs of real and imaginary parts), the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT (see below) yields a fast algorithm for convolution.
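A sketch of DHT-based cyclic convolution, assuming the dht_direct routine above (dht_convolve is an illustrative name, not a standard routine):

    def dht_convolve(x, y):
        # Cyclic convolution via the DHT convolution theorem, then inverse DHT (= DHT / n).
        n = len(x)
        X, Y = dht_direct(x), dht_direct(y)
        Xr, Yr = np.roll(X[::-1], 1), np.roll(Y[::-1], 1)   # X[(n-k) % n], Y[(n-k) % n]
        Z = (X * (Y + Yr) + Xr * (Y - Yr)) / 2
        return dht_direct(Z) / n

    x, y = np.random.rand(8), np.random.rand(8)
    direct = np.array([sum(x[k] * y[(j - k) % 8] for k in range(8)) for j in range(8)])
    np.allclose(dht_convolve(x, y), direct)   # -> True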

Fast Algorithms

Just as for the DFT, evaluating the DHT definition directly would require O(n²) arithmetic operations (see Big O notation). There are fast algorithms similar to the FFT, however, that compute the same result in only O(n log n) operations. Nearly every FFT algorithm, from Cooley-Tukey to prime-factor to Winograd, has a direct analogue for the discrete Hartley transform. (However, a few of the more exotic FFT algorithms, such as Bruun's and the QFT, have not yet been investigated in the context of the DHT.)

In particular, the DHT analogue of the Cooley-Tukey algorithm is commonly known as the fast Hartley transform (FHT) algorithm, and was first described by Bracewell in 1984. This FHT algorithm, at least when applied to power-of-two sizes n, is the subject of United States Patent No. 4,646,256, issued in 1987 to Stanford University.
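The flavor of such a radix-2 decimation-in-time splitting can be sketched as follows (a simplified recursive form for power-of-two n, assuming the dht_direct/NumPy setup above; it is not claimed to be Bracewell's exact published formulation):

    def fht(x):
        # Radix-2 split: the size-n/2 DHTs E and O of the even- and odd-indexed samples
        # are combined as
        #   H[k] = E[k mod n/2] + cos(2*pi*k/n)*O[k mod n/2] + sin(2*pi*k/n)*O[(-k) mod n/2],
        # giving O(n log n) work overall.
        x = np.asarray(x, dtype=float)
        n = len(x)
        if n == 1:
            return x.copy()
        E, O = fht(x[0::2]), fht(x[1::2])
        k = np.arange(n)
        c, s = np.cos(2 * np.pi * k / n), np.sin(2 * np.pi * k / n)
        km, kr = k % (n // 2), (-k) % (n // 2)
        return E[km] + c * O[km] + s * O[kr]

    x = np.random.rand(16)
    np.allclose(fht(x), dht_direct(x))   # -> True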

As mentioned above, DHT algorithms are typically less efficient (in terms of the number of floating-point operations) than corresponding DFT algorithms specialized for real inputs (or outputs), as demonstrated by Sorensen et al. in 1987. To illustrate this, the following table lists the lowest known operation counts (real multiplications + additions) for the DHT and the DFT, for power-of-two sizes, as achieved by the split-radix Cooley-Tukey FHT/FFT algorithm in both cases:

    size n    DHT (split-radix FHT)        DFT (split-radix FFT), real inputs
              (mults + adds = total)       (mults + adds = total)
         4       0 +     8 =     8            0 +     6 =     6
         8       2 +    22 =    24            2 +    20 =    22
        16      12 +    64 =    76           10 +    60 =    70
        32      42 +   166 =   208           34 +   164 =   198
        64     124 +   416 =   540           98 +   420 =   518
       128     330 +   998 =  1328          258 +  1028 =  1286
       256     828 +  2336 =  3164          642 +  2436 =  3078
       512    1994 +  5350 =  7344         1538 +  5636 =  7174
      1024    4668 + 12064 = 16732         3586 + 12804 = 16390

Lowest known operation counts (real mults + adds) for power-of-two DHT and DFT algorithms. (Note that, depending on implementation details, some of the multiplications can be traded for additions or vice versa.)

On the other hand, the redundant computations in FFTs due to real inputs are much more difficult to eliminate for large prime n, despite the existence of O(n log n) complex-data algorithms for such cases, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a prime-size complex-data FFT algorithm such as Rader's algorithm can be directly applied to the DHT of real data, for roughly a factor of two less computation than that of the equivalent complex FFT. Thus, the DHT may provide the most efficient practical pathway to computing real-data DFTs of large prime sizes (Frigo and Johnson, 2003).


References