The **Ackermann function** (also known as the **Ackermann–Péter function**), an important example in the theory of computation, is a recursive function that takes two natural numbers as arguments and returns a natural number as its value.


The Ackermann function is defined by recursion as follows:

- *A*(0, *n*) = *n* + 1 for *n* ≥ 0
- *A*(*m*, 0) = *A*(*m* − 1, 1) for *m* ≥ 1
- *A*(*m*, *n*) = *A*(*m* − 1, *A*(*m*, *n* − 1)) for *m*, *n* ≥ 1
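As a sketch, the three-case definition transcribes directly into a short recursive program (Python here purely for illustration):

```python
import sys

sys.setrecursionlimit(100_000)  # even modest arguments recurse deeply

def ackermann(m, n):
    """Direct transcription of the three-case definition above."""
    if m == 0:
        return n + 1                       # A(0, n) = n + 1
    if n == 0:
        return ackermann(m - 1, 1)         # A(m, 0) = A(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))  # A(m, n) = A(m-1, A(m, n-1))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61
```

Even this faithful version exhausts time and stack long before the arguments reach the larger values discussed below.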

The Ackermann function grows extremely fast; *A*(4, 2) is already about 2.00353 × 10^{19728}. This extreme growth can be exploited to show that the computable function *f*(*n*) = *A*(*n*, *n*) grows faster than every primitive recursive function and is therefore not primitive recursive.

| *m* \ *n* | 0 | 1 | 2 | 3 | 4 | *n* |
|---|---|---|---|---|---|---|
| 0 | 1 | 2 | 3 | 4 | 5 | *n* + 1 |
| 1 | 2 | 3 | 4 | 5 | 6 | *n* + 2 |
| 2 | 3 | 5 | 7 | 9 | 11 | 2*n* + 3 |
| 3 | 5 | 13 | 29 | 61 | 125 | 2^{*n*+3} − 3 |
| 4 | 13 | 65533 | 2^{65536} − 3 | A(3, 2^{65536} − 3) | A(3, A(4, 3)) | 2^{2^{⋯^2}} − 3 (tower of *n* + 3 twos) |
| 5 | 65533 | A(4, 65533) | A(4, A(5, 1)) | A(4, A(5, 2)) | A(4, A(5, 3)) | |
| 6 | A(5, 1) | A(5, A(5, 1)) | A(5, A(6, 1)) | A(5, A(6, 2)) | A(5, A(6, 3)) | |
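The small entries of the table can be reproduced mechanically. A memoized sketch (using `functools.lru_cache`, an implementation choice of ours) keeps the call count manageable for the first few rows:

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(200_000)

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

# Reproduce rows 0-3 of the table above, columns n = 0..4.
for m in range(4):
    print(m, [A(m, n) for n in range(5)])
```

Row 4 is already out of reach for all but its first two entries: A(4, 0) = 13 and A(4, 1) = 65533 are computable, but A(4, 2) is not.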

Also, Graham's number does not appear in any of the low-numbered rows, as it cannot be written with any small (or, indeed, physically recordable) number of Knuth up-arrows.

To see the definition in action, expand a small case by hand:

    A(1, 2) = A(0, A(1, 1))
            = A(0, A(0, A(1, 0)))
            = A(0, A(0, A(0, 1)))
            = A(0, A(0, 2))
            = A(0, 3)
            = 4

Now let us attempt a more complex case, specifically A(4, 3), the first value which cannot be recorded as a decimal expansion in the physical universe:

    A(4, 3) = A(3, A(4, 2))
            = A(3, A(3, A(4, 1)))
            = A(3, A(3, A(3, A(4, 0))))
            = A(3, A(3, A(3, A(3, 1))))
            = A(3, A(3, A(3, A(2, A(3, 0)))))
            = A(3, A(3, A(3, A(2, A(2, 1)))))
            = A(3, A(3, A(3, A(2, A(1, A(2, 0))))))
            = A(3, A(3, A(3, A(2, A(1, A(1, 1))))))
            = A(3, A(3, A(3, A(2, A(1, A(0, A(1, 0)))))))
            = A(3, A(3, A(3, A(2, A(1, A(0, A(0, 1)))))))
            = A(3, A(3, A(3, A(2, A(1, A(0, 2))))))
            = A(3, A(3, A(3, A(2, A(1, 3)))))
            = A(3, A(3, A(3, A(2, A(0, A(1, 2))))))
            = A(3, A(3, A(3, A(2, A(0, A(0, A(1, 1)))))))
            = A(3, A(3, A(3, A(2, A(0, A(0, A(0, A(1, 0))))))))
            = A(3, A(3, A(3, A(2, A(0, A(0, A(0, A(0, 1))))))))
            = A(3, A(3, A(3, A(2, A(0, A(0, A(0, 2)))))))
            = A(3, A(3, A(3, A(2, A(0, A(0, 3))))))
            = A(3, A(3, A(3, A(2, A(0, 4)))))
            = A(3, A(3, A(3, A(2, 5))))

This is about the simplest the expansion will be for some time, and it is now obvious why values of the function like those in the table above are very seldom calculated directly. It is also interesting to note how many steps are needed to simplify even the simplest-looking Ackermann expressions; each line in the example above is a single application of one of the three parts of the definition. Skipping many steps from this point, A(2, 5) evaluates to 13, and then A(3, 13) to 2^{16} − 3 = 65533. The next call, A(3, 65533), returns 2^{65536} − 3, a number with 19,729 decimal digits, vastly larger than the number of atoms in the observable universe (about 10^{80}). This number *n* is then input into the final A(3, *n*) call, which eventually expands to an expression of the form A(2, A(2, A(2, ... A(2, 1) ...))) that cannot be written explicitly in the physical universe.

The truly amazing aspect of the Ackermann function is that the only actual computation (rather than recursive calling) occurring is the base case A(0, *n*), which simply increments *n* and returns the result.

Intuitively, the Ackermann function defines generalizations of multiplication by two (iterated addition) and exponentiation with base 2 (iterated multiplication) to iterated exponentiation, iteration of that operation, and so on. It can be most concisely expressed nonrecursively using Conway's chained arrow: *A*(*m*, *n*) = (2 → (*n* + 3) → (*m* − 2)) − 3 for *m* ≥ 3.
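Equivalently, in terms of Knuth's up-arrows (to which three-element Conway chains reduce), A(m, n) = 2 ↑^{m−2} (n + 3) − 3 for m ≥ 3. A small sketch checking this identity for arguments that are still feasible to compute:

```python
def up(a, b, k):
    """Knuth's up-arrow a ^(k arrows) b: k = 1 is ordinary exponentiation,
    k = 2 is tetration (iterated exponentiation), and so on."""
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, k), k - 1)

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(3, n) = 2^(n+3) - 3 (one arrow), and A(4, 0) = 2^^3 - 3 = 13 (two arrows).
for n in range(4):
    assert ackermann(3, n) == up(2, n + 3, 1) - 3
assert ackermann(4, 0) == up(2, 3, 2) - 3
print("closed form verified for small arguments")
```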

In 1928, Wilhelm Ackermann considered a function *A*(*m*, *n*, *p*) of three variables, the *p*-fold iterated exponentiation of *m* with *n*, and proved that it is a recursive function which is not primitive recursive. This definition was later simplified by Rózsa Péter and Raphael Robinson to the two-variable definition given above.

Since the function *f*(*n*) = *A*(*n*, *n*) considered above grows very rapidly, its inverse grows very slowly; this inverse occurs in the run-time analysis of certain algorithms. In this and other contexts, the Ackermann function is often slightly redefined to a function with similar asymptotic behavior, but without the −3 terms, or using powers of 2 for row 0 (and hence omitting rows 0 through 2). These modified functions are not equal to the Ackermann function, but in terms of algorithm analysis they can be regarded as equivalent. The inverse of the function *f* is less than 4 for any conceivable input size, so for practical algorithm analysis it can be regarded as a constant.
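One common definition takes the inverse to be the least *k* with A(*k*, *k*) ≥ *n*. The sketch below (the name `inverse_ackermann` is our own) makes the "less than 4" claim concrete, though only for small *n*, since the naive loop would otherwise have to evaluate A(4, 4):

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(100_000)

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def inverse_ackermann(n):
    """Least k with A(k, k) >= n.  Only usable here for n <= A(3, 3) = 61:
    beyond that the loop would try to evaluate A(4, 4), which is infeasible,
    even though the answer would simply be 4 for any representable n."""
    k = 0
    while A(k, k) < n:
        k += 1
    return k

print(inverse_ackermann(5))    # 2, since A(2, 2) = 7 >= 5
print(inverse_ackermann(61))   # 3, since A(3, 3) = 61
```

Real analyses (e.g., of union-find) use definitions of the inverse that avoid evaluating the function itself; this sketch only illustrates why the value is effectively constant.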

The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. For example, a compiler which is able to "realize" that A(3, *n*) is simply 2^{*n*+3} − 3, or which is able to save intermediate values like the A(3, *n*) and A(2, *n*) in that calculation rather than recomputing them, can speed up the computation of A(3, 30) by a factor of hundreds of thousands. Also, if A(1, *n*) is computed directly rather than as a recursive expansion of the form A(0, A(0, ... A(1, 0) ...)), this will save significant amounts of time. Computing A(1, *n*) takes time linear in *n*; computing A(2, *n*) requires quadratic time, since it expands to O(*n*) nested calls to A(1, *i*) for various *i*; and computing A(3, *n*) requires time proportional to 4^{*n*+1}, according to several web sources.
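As a sketch of what such optimizations amount to, the closed forms from the table above let rows 0 through 3 be computed arithmetically, with recursion only needed for *m* ≥ 4:

```python
def ackermann_fast(m, n):
    """Closed forms for m <= 3 (from the table above); falls back to
    the recursive definition only for m >= 4."""
    if m == 0:
        return n + 1
    if m == 1:
        return n + 2
    if m == 2:
        return 2 * n + 3
    if m == 3:
        return 2 ** (n + 3) - 3
    if n == 0:
        return ackermann_fast(m - 1, 1)
    return ackermann_fast(m - 1, ackermann_fast(m, n - 1))

print(ackermann_fast(3, 30))                    # 8589934589, instantly
print(ackermann_fast(4, 2) == 2 ** 65536 - 3)   # True
```

With the closed forms in place, even A(4, 2), which is hopeless for the naive recursion, reduces to three cheap calls ending in a single big-integer exponentiation.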

It may be noted that A(4, 2), whose decimal expansion appears on several web pages, cannot feasibly be computed by recursive application of the Ackermann function; such expansions are instead obtained from the closed form A(4, 2) = 2^{65536} − 3.

- Some values of the Ackermann function
- example use of the Ackermann function as a benchmark - note the huge number of function calls used in computing low values