## 1. Introduction

In this tutorial, we’ll discuss the Ackermann function and the problems associated with its computation.

We’ll first study its definition and calculate its output for small values of the input. Then, we’ll provide an analytical formulation of its values in terms of iterated exponentiation and its further iteration.

By the end of this tutorial, we’ll know the definition of the Ackermann function and understand the reasons for its peculiar characteristics.

## 2. Definition of the Ackermann Function

The Ackermann function is generally indicated with $A(m, n)$, and it’s a well-known binary function. Its two input terms can assume any non-negative integer value. The Ackermann function is thus defined:

$$A(m, n) = \begin{cases} n + 1 & \text{if } m = 0 \\ A(m - 1, 1) & \text{if } m > 0 \text{ and } n = 0 \\ A(m - 1, A(m, n - 1)) & \text{if } m > 0 \text{ and } n > 0 \end{cases}$$

This definition is straightforward in its essence. Surprisingly, though, it leads to fascinating and unexpected behavior as the input to the function grows.
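The three cases of the definition translate directly into a short recursive program. Here’s a minimal sketch in Python (the function name `ackermann` is our own choice):

```python
def ackermann(m: int, n: int) -> int:
    """Direct translation of the three-case recursive definition."""
    if m == 0:
        return n + 1                                  # first case
    if n == 0:
        return ackermann(m - 1, 1)                    # second case
    return ackermann(m - 1, ackermann(m, n - 1))      # third case

print(ackermann(2, 3))  # → 9
```

Even this tiny program hints at the problem: for inputs as small as $m = 4$, the recursive calls become far too deep and too numerous to complete in practice.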

## 3. The Ackermann Function for Small Input

The definition by itself may not be very informative to us: a good method for understanding this function is, therefore, to calculate some of its values. Here, we start from the lowest values of its input.

Let’s incrementally create a table that holds the values of $A(m, n)$, as $m$ and $n$ vary. The simplest case is $m = 0$, where the function assumes the value of $n + 1$:

| $m \backslash n$ | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| 0 | 1 | 2 | 3 | 4 | 5 |

The second simplest case is for $n = 0$ (with $m > 0$), where the function assumes the value to the top-right of the corresponding cell: $A(m, 0) = A(m - 1, 1)$. This is, intuitively, the meaning we assign to the second case of the definition in terms of the cells of our table.

We can precompute $A(m, n)$ for a few small values of the input, and then fill the cells accordingly. For simplicity, we’re now filling the table up to the fourth row, corresponding to $m = 3$:

| $m \backslash n$ | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| 0 | 1 | 2 | 3 | 4 | 5 |
| 1 | 2 | 3 | 4 | 5 | 6 |
| 2 | 3 | 5 | 7 | 9 | 11 |
| 3 | 5 | 13 | 29 | 61 | 125 |
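These first rows can also be generated programmatically. A minimal sketch, using memoization to avoid recomputing identical calls (the function name `ackermann` is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Memoized version of the three-case recursive definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Print the table rows for m = 0 .. 3 and n = 0 .. 4.
for m in range(4):
    print(m, [ackermann(m, n) for n in range(5)])
```

Memoization helps for these small inputs, but it can’t tame the growth we discuss below: the intermediate values themselves become astronomically large.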

## 4. The Depth of the Recursion

For higher values of the input, computing the Ackermann function becomes increasingly hard. To see why this is the case, it makes sense to study how to compute the solution step-by-step for a relatively small input value.

Let’s look for example at the process of calculating $A(1, 2)$. Because $m > 0$ and $n > 0$, we fall on the third case of the analytical definition. We, therefore, need to compute:

$$A(1, 2) = A(0, A(1, 1))$$

The second term is a call to the function itself, which we need to resolve first. Because $A(1, 1)$ falls again into the third case of the analytical definition, we need to replace it with $A(0, A(1, 0))$:

$$A(1, 2) = A(0, A(0, A(1, 0)))$$

The rightmost operator now has a form that corresponds to the second case of the analytical definition. It therefore equates $A(0, 1)$:

$$A(1, 2) = A(0, A(0, A(0, 1)))$$

Because the last operator now corresponds to the first case of the definition, we can replace it with its corresponding value $2$:

$$A(1, 2) = A(0, A(0, 2))$$

The rightmost operator is also known to us, and it evaluates to $3$:

$$A(1, 2) = A(0, 3)$$

Finally, we apply the definition for the first case once again and find the value $4$, which we assign to $A(1, 2)$. If we define the Ackermann function as a program, all in all, to compute $A(1, 2)$ we had to perform 6 calls to that function.
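We can verify this count by instrumenting the function with a call counter. A minimal sketch (the variable names are ours):

```python
calls = 0

def ackermann(m: int, n: int) -> int:
    """Recursive definition, instrumented to count every invocation."""
    global calls
    calls += 1  # count this call, including recursive ones
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

result = ackermann(1, 2)
print(result, calls)  # → 4 6
```

The six counted calls match the six steps of the derivation above, one per invocation of the function.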

## 5. Down the Rabbit Hole

Note that the number of recursive calls to $A$ grows very fast. This is, in fact, the reason for the strange behavior that we observe from the fifth row of our table onward. These are the values associated with $A(m, n)$ when $m = 4$:

| $n$ | $A(4, n)$ |
|---|---|
| 0 | 13 |
| 1 | 65533 |
| 2 | $2^{65536} - 3 = 2 \uparrow\uparrow 5 - 3$ |
| 3 | $2 \uparrow\uparrow 6 - 3$ |

In the table above, the double arrow indicates iterated exponentiation, expressed with Knuth’s up-arrow notation. Notice that, from $A(4, 2)$ onward, the function always evaluates to numbers so large that they can’t be easily written down. For every subsequent increment of $m$ after $4$, we iterate the iterated exponentiation one more time:

$$A(m, n) = 2 \uparrow^{m - 2} (n + 3) - 3 \quad \text{for } m \geq 3$$
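For the fifth row, recursion is hopeless, but the closed form $A(4, n) = 2 \uparrow\uparrow (n + 3) - 3$ lets us compute the first few entries directly with arbitrary-precision integers. A minimal sketch (the helper names `tetration` and `a4` are ours):

```python
def tetration(base: int, height: int) -> int:
    """Iterated exponentiation: base ↑↑ height in Knuth's notation."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

def a4(n: int) -> int:
    # Closed form for the fifth row: A(4, n) = 2 ↑↑ (n + 3) - 3.
    return tetration(2, n + 3) - 3

print(a4(0))            # → 13
print(a4(1))            # → 65533
print(len(str(a4(2))))  # number of decimal digits of A(4, 2)
```

$A(4, 2)$ is still representable (it has nearly twenty thousand decimal digits), but calling `a4(3)` would attempt to build a number with more digits than atoms in the observable universe, so the loop would never finish.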

Finally, because the recursion is so deep, the Ackermann function finds frequent usage as a benchmark for the performance of compilers. It’s also a good benchmark to test the efficient allocation of memory for recursive calls.

## 6. Conclusions

In this tutorial, we studied the Ackermann function and the problems associated with its computation.