
Quicksort

Quicksort is a sorting algorithm that, on average, needs O(n log n) comparisons to sort n items. Its inner loop is inherently fast on nearly all computers, which makes it significantly faster in the average case than other O(n log n) algorithms that can sort in place or nearly so. (Quicksort is commonly described as an in-place algorithm, meaning that it has only constant, O(1), storage overhead, but this is not strictly true: most versions require O(log n) stack space to support their divide-and-conquer recursion.)

Table of contents
1 Performance and algorithm details
2 Introsort
3 Relationship to selection
4 Implementations
5 References

Performance and algorithm details

Quicksort's worst-case performance is O(n²): much worse than that of some other sorting algorithms such as heapsort or merge sort. Because of its good average performance and simple implementation, Quicksort is one of the most popular sorting algorithms in use. It was developed by C. A. R. Hoare. It is an unstable sort: it does not preserve the existing order of elements that compare as equal.

The Quicksort algorithm uses a recursive divide-and-conquer strategy to sort a list. The steps are:

  1. Pick a pivot element from the list.
  2. Reorder the list so that all elements less than the pivot precede it and all elements greater than the pivot follow it (elements equal to the pivot can go to either side).
  3. Recursively sort the sub-list of elements less than the pivot and the sub-list of elements greater than the pivot. If one of the sub-lists is empty, it can be ignored.

The inner loop which performs the partition is often very amenable to optimization, as all comparisons are made against a single pivot element. This is one reason why Quicksort is often the fastest sorting algorithm, at least on average.

The most crucial problem of Quicksort is the choice of pivot element. A naive implementation, like the ones below, will be terribly inefficient for certain inputs. For example, if the pivot always turns out to be the smallest element in the list, then Quicksort degenerates into selection sort with Θ(n²) running time. A secondary problem is the recursion depth, which becomes linear in that case, so the stack requires O(n) extra space. Quicksort can be rewritten to be only partially recursive: after the partitioning phase there are two sub-lists to be sorted, and the shorter sub-list is sorted by a recursive call to Quicksort while the larger one is handled by the main function in a tail-recursion elimination loop, as in the sketch below.
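
A minimal sketch of this scheme in C, assuming a simple Lomuto-style partition helper that uses the first element as the pivot, moves it to its final position and returns a pointer to it (the names partition and quicksort_tre are illustrative, not from any particular library):

/* *low is the pivot; high points one past the last element. */
static int *partition(int *low, int *high)
{
  int *store = low, *p, temp;
  for (p = low + 1; p < high; p++)
  {
     if (*p < *low)                     /* move smaller elements left */
     {
        store++;
        temp = *store; *store = *p; *p = temp;
     }
  }
  temp = *low; *low = *store; *store = temp;  /* pivot into place */
  return store;
}

void quicksort_tre(int *low, int *high)
{
  while (high - low > 1)
  {
     int *pivot = partition(low, high);
     if (pivot - low < high - (pivot + 1))
     {
        quicksort_tre(low, pivot);      /* recurse on the shorter side */
        low = pivot + 1;                /* loop on the longer side */
     }
     else
     {
        quicksort_tre(pivot + 1, high); /* recurse on the shorter side */
        high = pivot;                   /* loop on the longer side */
     }
  }
}

Because the recursive call always receives the smaller half, each level of recursion at most halves the sub-list size, so the recursion depth is bounded by O(log n) even when partitioning is consistently poor.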

Choosing a better pivot

The worst-case behavior of quicksort is not merely a theoretical problem. When quicksort is used in web services, for example, it is possible for an attacker to deliberately exploit the worst-case performance by choosing data which will cause a slow running time or maximize the chance of running out of stack space. See competitive analysis for more discussion of this issue.

Sorted or partially sorted data is quite common in practice, and the naive implementation which selects the first element as the pivot does poorly with such data. To avoid this problem the middle element can be used. This works well in practice, but crafted inputs can still cause worst-case performance.

A better optimization is to select as the pivot the median of the first, middle and last elements. Adding two randomly selected elements resists chosen-data attacks, more so if a cryptographically sound random number generator is used to reduce the chance of an attacker predicting the "random" elements. The use of the fixed elements reduces the chance of bad luck causing a poor pivot selection for partially sorted data when not under attack. These steps increase overhead, so it may be worth skipping them once the partitions grow small and the penalty for poor pivot selection drops.
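
A minimal sketch of the basic median-of-three selection in C (the helper name is illustrative; a hardened variant would extend it to five candidates, two of them chosen at random):

/* Returns a pointer to the median of the three values *a, *b and *c. */
static int *median_of_three(int *a, int *b, int *c)
{
  if (*a < *b)
  {
     if (*b < *c) return b;        /* a < b < c */
     return (*a < *c) ? c : a;     /* c <= b, so the median is max(a, c) */
  }
  if (*a < *c) return a;           /* b <= a < c */
  return (*b < *c) ? c : b;        /* c <= a, so the median is max(b, c) */
}

Before partitioning, the selected element can simply be swapped into the pivot position (the first element in the implementations below), leaving the rest of the algorithm unchanged.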

Finding the true median value and using it as the pivot is possible when the number of elements is large enough to make it necessary, but this is seldom done in practice.

Other optimizations

Another optimization is to switch to a different sorting algorithm once the list becomes small, perhaps ten or fewer elements. Selection sort might be inefficient for large data sets, but it is often faster than Quicksort on small lists.

One widely used implementation of quicksort, that in the 1997 Microsoft C library, used a cutoff of 8 elements before switching to insertion sort, asserting that testing had shown that to be a good choice. It used the middle element for the partition value, asserting that testing had shown that the median of three algorithm did not, in general, increase performance.

Sedgewick (1978) suggested an enhancement to the use of simple sorts for small numbers of elements: rather than sorting each small sub-list as it appears, postpone the simple sorts until the quicksort has finished, then run a single insertion sort over the whole array. This reduces the number of instructions required, as sketched below.
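
A minimal sketch of this variant in C, reusing the partition helper sketched earlier (the cutoff of 10 is an illustrative choice; the text below mentions real libraries using 8 and 16):

#define THRESHOLD 10                /* illustrative cutoff */

/* Quicksort that leaves sub-lists shorter than THRESHOLD unsorted. */
void quicksort_partial(int *low, int *high)
{
  int *pivot;
  if (high - low < THRESHOLD)
     return;                        /* postpone the small sort */
  pivot = partition(low, high);
  quicksort_partial(low, pivot);
  quicksort_partial(pivot + 1, high);
}

/* A single final insertion sort finishes the job; every element is
   already within THRESHOLD positions of its final place, so this
   pass is O(n) for a fixed threshold. */
void insertion_sort(int *low, int *high)
{
  int *i, *j, v;
  for (i = low + 1; i < high; i++)
  {
     v = *i;
     for (j = i; j > low && *(j - 1) > v; j--)
        *j = *(j - 1);
     *j = v;
  }
}

/* Usage: quicksort_partial(a, a + n); then insertion_sort(a, a + n); */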

LaMarca and Ladner (1997) consider "The Influence of Caches on the Performance of Sorting", a very significant issue on microprocessor systems with multi-level caches and high cache-miss times. They conclude that while the Sedgewick optimization decreases the number of instructions, it also decreases locality of cache references, worsening performance compared to doing the simple sort when the need for it is first encountered. However, the effect was not dramatic, and they suggested it was starting to become more significant with more than 4 million 64-bit float elements. This work is cited by Musser (see below). Their work showed greater improvements for other sorting algorithms.

Introsort

A variation on quicksort which is becoming widely used is introspective sort, often called introsort (Musser 1997). It starts with quicksort and switches to heapsort when the recursion depth exceeds a preset limit. This overcomes the overhead of increasingly complex pivot selection techniques while ensuring O(n log n) worst-case performance. Musser reported that, on a median-of-3 killer sequence of 100,000 elements, the running time was 1/200th that of median-of-3 quicksort. Musser also considered the effect of Sedgewick's delayed small sorting on caches, reporting that it could double the number of cache misses, but that its performance with double-ended queues was significantly better and it should be retained for template libraries, in part because the gain in other cases from doing the sorts immediately was not great.
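
A minimal sketch of the introsort control logic in C, reusing the partition helper sketched earlier and assuming a heapsort helper with the same pointer conventions (both helper names are illustrative); the initial depth limit of 2*floor(log2(n)) follows Musser's suggestion:

void introsort_loop(int *low, int *high, int depth_limit)
{
  while (high - low > 1)
  {
     int *pivot;
     if (depth_limit == 0)
     {
        heapsort(low, high);        /* assumed helper: O(n log n) worst case */
        return;
     }
     depth_limit--;
     pivot = partition(low, high);
     introsort_loop(pivot + 1, high, depth_limit);
     high = pivot;                  /* loop instead of recursing on the left */
  }
}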

The June 2000 SGI C++ Standard Template Library stl_algo.h implementation of its unstable sort uses the Musser introsort approach, with the recursion depth at which to switch to heapsort passed as a parameter, median-of-3 pivot selection and the Sedgewick final insertion sort pass. The element threshold for switching to the simple insertion sort was 16.

The C++ STL implementations generally outperform the C implementation significantly (running several times as fast), because they are written so that the comparison can be inlined, while the generic C equivalent must call a function through a pointer for every comparison. This advantage could be recovered by writing custom versions of the sort function, at the cost of losing the benefit of a totally generic library function. It is an example of the importance of constant factors and architecture.
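
For illustration, the C library's generic qsort receives its comparison as a function pointer, which is called once per comparison and generally cannot be inlined:

#include <stdlib.h>

/* Comparison callback for qsort; the indirect call is the overhead
   that an inlined C++ comparator avoids. */
static int compare_ints(const void *a, const void *b)
{
  int x = *(const int *)a;
  int y = *(const int *)b;
  return (x > y) - (x < y);
}

/* Usage: qsort(a, n, sizeof(int), compare_ints); */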

The most direct competitor of Quicksort is heapsort. Heapsort is typically somewhat slower than Quicksort, but its worst-case running time is always O(n log n). Quicksort is usually faster, though the chance of worst-case performance remains, except in the introsort variant. If it is known in advance that heapsort will be necessary, using it directly will be faster than waiting for introsort to switch to it.

Relationship to selection

A simple selection algorithm, which finds the kth smallest of a list of elements, works much like quicksort, except that instead of recursing on both sub-lists it recurses only on the sub-list which contains the desired element. This small change lowers the average complexity to linear, O(n), time. A variation of this algorithm chooses a provably good pivot (a median of medians) at every step, bringing the worst-case time down to O(n) as well.
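
A minimal sketch of this selection procedure in C, reusing the partition helper sketched earlier (the function name is illustrative; k is zero-based and must satisfy 0 <= k < high - low):

int quickselect(int *low, int *high, int k)
{
  for (;;)
  {
     int *pivot = partition(low, high);
     int rank = pivot - low;       /* elements smaller than the pivot */
     if (k == rank)
        return *pivot;             /* the pivot is the answer */
     if (k < rank)
        high = pivot;              /* answer lies left of the pivot */
     else
     {
        k -= rank + 1;             /* skip the left part and the pivot */
        low = pivot + 1;
     }
  }
}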

Implementations

A simple implementation of Quicksort on an array of integers in C:

void quicksort(int * low, int * high)
{
  /* low points to the first element, high one past the last. */
  /* We naively use the first value in the array as the pivot; */
  /* this will not give good performance in real usage */
  
  int * lowbound = low + 1;       /* the high boundary of the low subarray */
  int * highbound = high - 1;     /* the low boundary of the high subarray */
  int temp;
  
  while(lowbound <= highbound) /* partition the array */
  {
     if(*lowbound < *low)         /* compare to pivot */
       lowbound++;                /* move lowbound toward the middle */
     else
     {
        temp = *lowbound;         /* swap *lowbound and *highbound */
        *lowbound = *highbound;
        *highbound = temp;
        highbound--;              /* move highbound toward the middle */
     }
  }
  
  highbound++;                    /* move bounds back to the correct positions */
  lowbound--;
  
  temp = *low;                    /* move the pivot into the middle */
  *low = *lowbound;
  *lowbound = temp;
  
  if(low != lowbound)             /* recurse on the subarrays */
    quicksort(low, lowbound);
  if(high != highbound)
    quicksort(highbound, high);
}
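
Since high points one past the last element, an array of n integers is sorted with a call such as:

int a[] = { 5, 2, 9, 1, 3 };
quicksort(a, a + 5);               /* sorts all five elements */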

Here's an implementation in Python, which uses a more efficient in-place partitioning scheme:

def partition(array, start, end, compare):
    while start < end:
        # at the top of this loop, our partition element is at 'start'
        while start < end:
            if compare(array[start], array[end]):
                (array[start], array[end]) = (array[end], array[start])
                break
            end = end - 1
        # at the top of this loop, our partition element is at 'end'
        while start < end:
            if compare(array[start], array[end]):
                (array[start], array[end]) = (array[end], array[start])
                break
            start = start + 1
    return start

def quicksort(array, compare=lambda x, y: x > y, start=None, end=None):
    """The fastest exchange sort for most purposes."""
    if start is None: start = 0
    if end is None: end = len(array)
    if start < end:
        i = partition(array, start, end-1, compare)
        quicksort(array, compare, start, i)
        quicksort(array, compare, i+1, end)

Here's a very short version written in the functional programming language Haskell:

quicksort :: (Ord a) => [a] -> [a]
quicksort []           = []
quicksort (pivot:rest) = quicksort [y | y <- rest, y < pivot]
                         ++ [pivot]
                         ++ quicksort [y | y <- rest, y >= pivot]

Note the use of list comprehensions, and also that this uses the head of the list as the pivot element, so it does not give optimum results if the list is already nearly sorted (improvements can be made by preselecting a more suitable pivot before the main quicksort function). This version is very easy to understand, as it simply recursively sorts the lists formed by filtering all the items less than the head and all the items greater than or equal to the head, and inserts the singleton list containing the head item in between. The code is almost self-explanatory. (Unfortunately, in existing Haskell implementations it is inefficient: the copying and concatenation of many small lists makes this quicksort perform considerably worse, on average, than the in-place versions above. Future Haskell implementations may perform such optimizations "behind the scenes", but currently they are not required by the language.)

References

Sedgewick, R. (1978). "Implementing Quicksort Programs". Communications of the ACM 21 (10): 847-857.

LaMarca, A.; Ladner, R. E. (1997). "The Influence of Caches on the Performance of Sorting". Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms.

Musser, D. R. (1997). "Introspective Sorting and Selection Algorithms". Software: Practice and Experience 27 (8): 983-993.