## 1. Introduction

In this tutorial, we’ll examine the definition, complexity, and **algorithms to generate permutations of an array**.

## 2. Permutations

The permutation we’ll be talking about here is **how to arrange objects in positions.**

For example, suppose we’re playing a game where we have to find a word out of the following three letters: A, B, and C. So we try all permutations in order to make a word:

ABC, ACB, BAC, BCA, CAB, CBA

From these six permutations, we see that there is indeed one word: CAB.

However, we have a slight semantic problem within this word game. Colloquially, we often say, “How many 3-letter combinations can we make?” The problem is: are “combination” and “permutation” interchangeable? The mathematical answer is no.

Simply put, **a permutation has to do with an ordered set of objects**, exactly like setting up words. **A combination deals with an unordered set**. Take a pair of dice, for example. When we roll them, we’re only interested in the sum. We don’t label the dice, so the combination of 3 and 4 is the same as the combination of 4 and 3.

### 2.1. How Many Permutations Are There?

The number of permutations of n objects is n! (n factorial). So for three objects, the number of permutations is 3! = 3 × 2 × 1 = 6.

Intuitively, we can think of the process of generating permutations as a recursive procedure. For the first position, we have n possibilities (3 in the picture). For the second position, we have only n − 1 possibilities to choose from (2 in the picture). As we fill each position, we lose another possibility. Therefore, we have n × (n − 1) × … × 2 × 1 possibilities. This is the definition of **n factorial**, written n!.
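As a quick sanity check, here’s a short Python sketch (the `factorial` helper is ours) that compares a brute-force enumeration of the permutations of three letters against 3!:

```python
from itertools import permutations

def factorial(n: int) -> int:
    """Compute n! iteratively."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

letters = ["A", "B", "C"]
count = len(list(permutations(letters)))   # brute-force enumeration
print(count, factorial(len(letters)))      # both are 6
```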

### 2.2. Fast Growing Permutations

As we saw in the last example, there are six possibilities for three objects. This is manageable, **but as the number of objects increases, the number of permutations grows factorially, even faster than exponentially**:

We can see that if we have a set of 4, 5, or even 6 letters, we would have 24, 120, or even 720 permutations to sift through. The number of permutations grows rapidly.

To illustrate how big these numbers are, suppose we start with a standard deck of 52 cards. The number of possible orderings is 52!, a 68-digit number (we didn’t calculate this ourselves, someone else did):

80658175170943878571660636856403766975289505440883277824000000000000

The age of the universe is approximately 13.8 billion years. This equals about 4.4 × 10^{17} seconds, or 4.4 × 10^{26} nanoseconds. Even if we could find a dealer in Las Vegas who could shuffle the cards once every nanosecond, he would still not even come close to all the possible orderings before the end of the universe.

Furthermore, the amount of time it takes us to generate all permutations is not our only limitation. For example, if we were to write a program to generate these permutations recursively (see below), we would quickly run out of memory space.

Although this is bad news for those of us who want to generate all the possible permutations, it is good news for encryption. For instance, a standard 256-bit encryption key has 2^{256} ≈ 1.16 × 10^{77} combinations of zeros and ones. It would take us several lifetimes of the universe to try every key. We have to rely on other methods of finding a password, such as guessing the owner’s dog’s name or “qwerty.”

### 2.3. Mathematical Notation

A common **mathematical notation for a single permutation is the two-line notation**. Here we represent the original array on the first line, and how the elements are transformed on the second line. For our running example, the first line reads 1 2 3 4 5 and the second line reads 2 5 4 3 1.

This represents the permutation that sends 1 → 2, 2 → 5, 3 → 4, 4 → 3, and 5 → 1.

However, if we order the elements in canonical order, then we can write the permutation as one line. In our example, the canonical order of the elements is the natural numbers 1, 2, 3, 4, 5.

So we can write the permutation in one-line notation as (2 5 4 3 1).

Another notation that we often use sheds some light on the structure of permutations. This notation is called the cyclic notation. With it, a permutation is represented as a set of “cycles,” where a cycle is a sequence of elements in which each element permutes to the next, and the last permutes back to the first. For our permutation, we can see there are two cycles.

Notice that 1 permutes to 2, and 2 permutes to 5, but then 5 permutes back to 1 again. We have the cycle (1 2 5).

The rest of the permutation is also a cycle, where 3 permutes to 4, and then 4 permutes back to 3, giving the cycle (3 4).

Putting these cycles together, we get the equivalent one-line cyclic notation: (1 2 5)(3 4).

We can put all permutations in this notation. However, one problem is that this notation is not unique: (1 2 5)(3 4), (2 5 1)(4 3), and (3 4)(5 1 2) all describe the same permutation.

We can remedy this situation by putting the largest element of each cycle first: (4 3)(5 1 2).

This is called the canonical cyclic notation.
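To make the cycle decomposition concrete, here’s a small Python sketch (the names `to_cycles` and `canonical` are ours) that recovers the cycles of our example permutation and rewrites them with the largest element of each cycle first:

```python
def to_cycles(perm):
    """Decompose a 1-indexed one-line permutation into cycles.

    perm[i - 1] is where element i is sent, so (2, 5, 4, 3, 1) means
    1 -> 2, 2 -> 5, 3 -> 4, 4 -> 3, 5 -> 1.
    """
    seen = set()
    cycles = []
    for start in range(1, len(perm) + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:       # follow the mapping until we loop back
            seen.add(x)
            cycle.append(x)
            x = perm[x - 1]
        cycles.append(cycle)
    return cycles

def canonical(cycles):
    """Rotate each cycle so its largest element comes first,
    then order the cycles by their first elements."""
    rotated = []
    for c in cycles:
        i = c.index(max(c))
        rotated.append(c[i:] + c[:i])
    return sorted(rotated, key=lambda c: c[0])

print(to_cycles((2, 5, 4, 3, 1)))             # [[1, 2, 5], [3, 4]]
print(canonical(to_cycles((2, 5, 4, 3, 1))))  # [[4, 3], [5, 1, 2]]
```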

## 3. Simple Recursive Algorithm

As we can see in the picture and explanation in the last section, **generating permutations can be formulated in a simple recursive algorithm**. At each recursion step, we have the permutation we generated thus far and the set of remaining objects to permute.

We’re done once there are no objects left to permute (the remaining-object list is empty). The answer is the permutation generated thus far: we add it to the accumulated list of generated permutations and return up the recursion.

If there are objects left to permute, then we loop over all the remaining objects. Within the loop, we add the chosen object to the end of the given permutation. We take the chosen object away from the remainder list, and recursively call with the new permutation and new remainder list:
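A minimal Python sketch of this recursion might look as follows (the names `generated`, `permutation`, and `remainder` mirror the description above):

```python
def permute(generated, permutation, remainder):
    # Base case: nothing left to permute, record the finished permutation.
    if not remainder:
        generated.append(permutation)
        return
    # Recursive case: try each remaining object in the next position.
    for i, obj in enumerate(remainder):
        permute(generated,
                permutation + [obj],
                remainder[:i] + remainder[i + 1:])

generated = []
permute(generated, [], ["A", "B", "C"])
print(generated)
# [['A', 'B', 'C'], ['A', 'C', 'B'], ['B', 'A', 'C'],
#  ['B', 'C', 'A'], ['C', 'A', 'B'], ['C', 'B', 'A']]
```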

The initial call to this routine is made with an empty list of generated permutations, an empty permutation, and the full list of objects. As a check, we can see the recursion terminates because at each recursive call, the remainder list gets smaller.

The generated-permutations and remaining-objects structures can be lists, sets, or arrays; the objects in them do not have to be ordered. On the other hand, order is important for the permutation being built, so it should be an array.

## 4. Heap’s Algorithm

One of the more traditional and effective algorithms used to generate permutations is the method developed by B. R. Heap. **This algorithm is based on swapping elements to generate the permutations**. It produces every possible permutation of these elements exactly once.

This method is a systematic algorithm, which at each step chooses a pair of elements to switch in order to generate new permutations. We see that the advantage of this algorithm, as opposed to the previous algorithm, is that we use less memory.

**The principle of Heap’s algorithm is decrease and conquer**. The algorithm basically generates all the permutations that end with the last element. Then the (n − 1)! permutations of the first n − 1 elements are adjoined to this last element. While looping over the n − 1 elements, there is a (seemingly mystical) swap step that depends on whether n is odd or even.

- If n is odd, swap the first and last element.
- If n is even, swap the i-th and last element (where i is the loop index).

In each iteration, the algorithm will produce all the permutations that end with the current last element.

For example, for four elements, the sequence is as follows (columns from left to right):

In row A, we see the ‘last element’. In rows B, C, and D, we have the permutations of the remaining three elements. By looking at row B, we can see the last two elements are permuted.

A more complete explanation, with examples, can be found in Ruslan’s write-up. For the more mathematically inclined, there is also a proof of why Heap’s algorithm works. In Robert Sedgewick’s 1977 review of permutation algorithms, Heap’s algorithm was found to be one of the simplest and most efficient.

Although Heap’s original formulation was non-recursive, Heap’s algorithm can be formulated in a recursive or non-recursive manner.

### 4.1. Recursive Heap’s Algorithm

We can write a recursive version of Heap’s Algorithm:
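Since the original snippet isn’t reproduced here, the following Python sketch is one way to write the recursive version, following the odd/even swap rule described above (the helper name `swap` and function name `heaps` are ours):

```python
def swap(a, i, j):
    """Exchange the values at indices i and j of array a."""
    a[i], a[j] = a[j], a[i]

def heaps(a, n, output):
    """Recursive Heap's algorithm: permute the first n elements of a."""
    if n == 1:
        output.append(list(a))   # record a snapshot of the current state
        return
    for i in range(n):
        heaps(a, n - 1, output)
        if n % 2 == 1:
            swap(a, 0, n - 1)    # n odd: swap first and last
        else:
            swap(a, i, n - 1)    # n even: swap i-th and last

out = []
heaps([1, 2, 3], 3, out)
print(out)
# [[1, 2, 3], [2, 1, 3], [3, 1, 2], [1, 3, 2], [2, 3, 1], [3, 2, 1]]
```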

Note that swap(A, i, j) exchanges the value of A[i] with the value of A[j].

An important implementation note concerns the array A. In one sense, A acts like a static, shared array: it is passed by reference, so the changes to it that occur in the recursive sub-calls persist when returning to the calling function.

**However, this implementation may swap an element with itself**, for example, in the last iteration of the loop when n is even and i equals n − 1. This is an intentional simplification for demonstration purposes.

### 4.2. Non-Recursive Heap’s Algorithm

We can also define a non-recursive Heap’s algorithm that is derived from the recursive. The comments within the code hint at the correspondence:
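As a sketch, here is one common non-recursive formulation in Python, where the counter array `c` plays the role of the recursive version’s loop indices (the function name `heaps_iterative` is ours):

```python
def heaps_iterative(a):
    """Non-recursive Heap's algorithm: all permutations of list a."""
    n = len(a)
    output = [list(a)]
    # c[i] plays the role of the loop counter i in the recursive version.
    c = [0] * n
    i = 1
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                a[0], a[i] = a[i], a[0]          # even level: swap first and i-th
            else:
                a[c[i]], a[i] = a[i], a[c[i]]    # odd level: swap c[i]-th and i-th
            output.append(list(a))
            c[i] += 1
            i = 1        # simulate the recursive descent
        else:
            c[i] = 0     # reset this counter and "return" one level up
            i += 1
    return output

print(heaps_iterative([1, 2, 3]))
# [[1, 2, 3], [2, 1, 3], [3, 1, 2], [1, 3, 2], [2, 3, 1], [3, 2, 1]]
```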

## 5. QuickPerm Algorithm

Though Heap’s algorithm is traditionally the permutation algorithm of choice, there are other more recent algorithms. **QuickPerm, also based on swapping and inspired by Heap sort, is one of the most efficient**. We can consult the QuickPerm site for the variations and implementations, but we’ll present the pseudo-code here for one of the versions (countdown QuickPerm).

The primary index-controller array in the QuickPerm algorithm is p[N], which controls iteration and the upper index boundary for the loop variable i. Each p[i] acts like a digit in a mixed-radix counter: p[0] is always zero, p[1] counts in base 2, p[2] in base 3, and so on. We use this array to keep track of the generation process: all permutations of the ‘lower’ elements are formed before the next element is considered.

As the algorithm runs, the p array counts through these mixed-radix values while the permutations develop, with the lower (leftmost) elements permuting first. For our case of N = 4, once the first three p elements have counted down to zero, we have developed all permutations.

The full algorithm is as follows:
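As a sketch of the countdown QuickPerm version in Python (variable names follow the description above; treat this as an illustration rather than the reference implementation):

```python
def quickperm(a):
    """Countdown QuickPerm: generate all permutations of list a by swaps."""
    n = len(a)
    results = [list(a)]
    # Index-controller array: p[i] counts down from i to 0 (p[n] is a sentinel).
    p = list(range(n + 1))
    i = 1
    while i < n:
        p[i] -= 1
        j = p[i] if i % 2 == 1 else 0   # upper index is i, lower index is j
        a[i], a[j] = a[j], a[i]
        results.append(list(a))
        i = 1
        while p[i] == 0:   # carry: reset exhausted counters, move upward
            p[i] = i
            i += 1
    return results

print(quickperm([1, 2, 3]))
# [[1, 2, 3], [2, 1, 3], [3, 1, 2], [1, 3, 2], [2, 3, 1], [3, 2, 1]]
```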

There are implementations of the QuickPerm algorithm in Java, Python, C++, and Go, for example.

## 6. Permutations in Lexicographic Order

Lexicographic order is a generalization of, for instance, alphabetical order. **The key to establishing lexicographic order is the definition of a set of ordering functions (such as less-than, greater-than, and equal-to)**. We can define these functions in any way appropriate for the data type.

If a set of such functions is given instead of the usual >, <, and == operators (or those operators are overridden in object-oriented languages), the array elements can be arbitrary objects. For example, suppose we had an array of structures representing people’s names, with two fields: first name and last name. The ordering function would look at the last name first. If two people had the same last name, then the ordering function would look at the first name.
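As an illustration, here’s a hypothetical Python comparator for such name records (the `people` data and the `compare` function are our own examples):

```python
from functools import cmp_to_key

# Hypothetical records: (first name, last name) tuples.
people = [("Ada", "Lovelace"), ("Alan", "Turing"), ("Grace", "Hopper")]

def compare(p, q):
    """Classic comparator: negative, zero, or positive.

    Orders by last name first, then by first name on ties."""
    if p[1] != q[1]:
        return -1 if p[1] < q[1] else 1
    if p[0] != q[0]:
        return -1 if p[0] < q[0] else 1
    return 0

print(sorted(people, key=cmp_to_key(compare)))
# [('Grace', 'Hopper'), ('Ada', 'Lovelace'), ('Alan', 'Turing')]
```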

The lexicographic-order algorithm, formulated by Edsger W. Dijkstra in A Discipline of Programming (1976), can be stated as follows:

- Find the largest index k such that a[k] < a[k + 1] (if no such k exists, then this is already the last permutation)
- Find the largest index l such that l > k and a[k] < a[l]
- Swap a[k] and a[l]
- Reverse the suffix starting at a[k + 1]

This algorithm returns the next lexicographic permutation. If the input is already the greatest permutation, the array is left unchanged and the algorithm reports that no next permutation exists. Interestingly, if we have repeated elements, the algorithm will skip over the duplicates to find the next distinct permutation in the series.
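Putting the four steps together, a Python sketch of the next-permutation routine (the name `next_permutation` is ours) might look like this:

```python
def next_permutation(a):
    """Advance list a to its next lexicographic permutation in place.

    Returns True on success, or False (leaving a unchanged) if a is
    already the last permutation."""
    # Step 1: find the largest k with a[k] < a[k + 1].
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:
        k -= 1
    if k < 0:
        return False               # already the last permutation
    # Step 2: find the largest l > k with a[k] < a[l].
    l = len(a) - 1
    while a[l] <= a[k]:
        l -= 1
    # Step 3: swap a[k] and a[l].
    a[k], a[l] = a[l], a[k]
    # Step 4: reverse the suffix starting at a[k + 1].
    a[k + 1:] = reversed(a[k + 1:])
    return True

a = [1, 2, 3]
perms = [list(a)]
while next_permutation(a):
    perms.append(list(a))
print(perms)
# [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```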

## 7. Conclusion

In this article, we reviewed permutations and the algorithms that we can use to compute them. We found it important to have an efficient implementation since the number of permutations rises rapidly with the number of elements. As a result, we often prefer non-recursive methods, since deep recursion consumes stack space in proportion to its depth.

We presented two such methods: Heap’s algorithm and the QuickPerm algorithm. Though these algorithms produce permutations in no particular order, we also presented another class of permutation algorithms that generate permutations in lexicographic order.