## 1. Overview

In this tutorial, we’ll study how to sort a list in the worst possible ways. Yes, the worst ways.

## 2. The Worst Algorithms

A lesser-known but important branch of the science of algorithms is dedicated to the study of the poorest-performing methods for sorting lists.

### 2.1. The Appeal of Doing Things Badly

There’s an evident intellectual fascination in the study of how to do things in the worst possible manner. But also, there’s an educational or pedagogical value in learning how to do things badly. When we know how things shouldn’t be done, we concurrently know how things should be done instead.

Further, there are some cases in which we specifically want the poorest performance in an algorithm. This is when the computation has a real-world semantic meaning of some kind, which suggests that its maximum benefit or utility arises from the longest computational time.

Let’s suppose we have a maze to traverse. Normally, in graph theory, we study the quickest or shortest path that performs this task. If, however, the maze is a real-world labyrinth that’s particularly beautiful, we may derive aesthetic pleasure from taking the longest path instead.

The same argument is valid for sorting algorithms. The intellectual challenge of sorting a list in a particularly convoluted way may make a worse-performing algorithm pleasing and satisfying to the programmer who builds it.

### 2.2. Pedagogical Value and Benchmarks

Studying the worst-performing algorithms also has the pedagogical value of teaching us to think outside the box when building them. Many of them, as we’ll see shortly, would at first glance appear never to terminate. Some unexpected termination conditions, however, can be found if we’re creative enough.

There’s practical value in understanding this idea when studying modern computer science. If one considers that the computation takes place in a physical medium, one can then exploit physical, rather than algorithmic, constraints that eventually cause the computation to terminate. This is particularly important when we study the termination conditions of quantum algorithms.

In the termination analysis of quantum programs, one typical physical phenomenon that causes their termination is quantum decoherence, which is a purely physical, not algorithmic, process. Learning to shift the focus of algorithmic analysis from its procedural or mathematical aspects to the physical embeddedness of computing systems helps us transition from the study of classical computation to that of quantum computation.

Finally, a worst-performing algorithm is also useful as a benchmark for some other algorithms that we’re developing. If we manage to develop one that performs worse than the theoretical worst, chances are that our customers won’t be happy with it.

## 3. Bogosort

### 3.1. Definition of Bogosort

The universally acclaimed worst sorting algorithm is Bogosort, sometimes called Monkey Sort or Random Sort, for reasons we’ll see shortly. Bogosort develops from the idea that, in probability theory, if a certain phenomenon is possible, then, given enough trials, it will eventually happen.

Bogosort starts with an array of n elements. It then checks whether the elements are sorted, which we assume to take O(n) time. If they aren’t, Bogosort randomizes their positions and checks again whether the array is sorted, repeating the randomization otherwise. This process iterates as many times as necessary and terminates when we check the array and find it sorted.

### 3.2. Computational Time

If the sorted array is strictly monotonic, then the probability for any given randomization to be the one we want is 1/n!. This means that the expected number of randomizations is n!, and thus the expected computational time is O(n × n!), which makes Bogosort feasible only for very low values of n. There’s also no guarantee that we’ll ever find a solution in any finite amount of time. For this reason, the worst-case computational time of Bogosort is unbounded.
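To make the procedure concrete, here’s a minimal Python sketch of Bogosort (the function and helper names are our own choices):

```python
import random

def is_sorted(arr):
    """Check in O(n) time whether the array is in non-decreasing order."""
    return all(arr[i] <= arr[i + 1] for i in range(len(arr) - 1))

def bogosort(arr):
    """Shuffle until sorted: expected O(n * n!) time, unbounded worst case."""
    arr = list(arr)
    while not is_sorted(arr):
        random.shuffle(arr)  # randomize the elements' positions
    return arr
```

Even for an array of just 10 distinct elements, the expected number of shuffles is 10! ≈ 3.6 million, so in practice this is only runnable for tiny inputs.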

## 4. Can We Do Worse?

We can do better though: that is, we can do worse. Here we study some procedures that outperform Bogosort in their inefficiency, while still eventually terminating with a sorted array.

### 4.1. Exploiting Soft Errors

The first method exploits the so-called soft errors in electronic systems. If a semiconductor memory chip is struck by an ionizing particle, the chip’s state may be perturbed and the stored datum altered. If we imagine that the array is stored in such a memory chip, and that every part of the chip is equally likely to be struck, we can develop a sorting algorithm accordingly:

• first, we check whether the array is in order
• if it isn’t, we wait for some time and then test it again

At some point, enough single-event upsets will have taken place. As a consequence of them, the memory chip will finally contain a sorted array.
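We can’t summon ionizing particles on demand, but we can simulate the idea. In this sketch, a randomly flipped bit stands in for a single-event upset; the function name and the simulation itself are our own invention:

```python
import random

def radiation_sort(arr, bits=4, seed=None):
    """Toy model of 'soft error' sorting: repeatedly check the array
    and wait for random single-event upsets (simulated here as random
    bit flips) to leave it sorted."""
    rng = random.Random(seed)
    arr = list(arr)
    while any(arr[i] > arr[i + 1] for i in range(len(arr) - 1)):
        # a cosmic ray flips one random bit of one random element
        i = rng.randrange(len(arr))
        arr[i] ^= 1 << rng.randrange(bits)
    return arr
```

Note that, like real soft errors, the simulated bit flips corrupt the data: the final array is sorted, but its values generally differ from the originals.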

### 4.2. The Universe Has Intrinsic Order

The second method derives from a reflection on what it means to sort an array. Of course, an array is sorted if it contains elements that are in some order. The question however becomes: according to what criterion do we decide what order to apply?

The theory of intelligent design states that the universe possesses an intrinsic order. We know that, for any list of n elements, the probability of observing one particular order of those elements is infinitesimally small. In fact, it’s exactly 1/n!.

This probability is so minuscule that we can infer, with proper Bayesian reasoning, that some unknown implicit order causes that particular pattern to be observed. This order, of course, isn’t intelligible to humans, but it exists nonetheless. As a consequence, we can develop this intelligent design algorithm:

• first, we consider an array of elements
• then, because the universe has an intrinsic order, we understand that the array also has one

This is the only sorting algorithm that we can execute in O(0) time, and in fact, it sorts the elements in place and without performing any operations at all.
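In code, the entire algorithm (a function name of our choosing) amounts to doing nothing:

```python
def intelligent_design_sort(arr):
    # By assumption, the array already reflects the universe's intrinsic
    # order, which is simply not intelligible to humans: nothing to do.
    return arr
```

Since no elements are ever read, compared, or moved, the array really is returned untouched and in place.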

### 4.3. An Everettian Sorting

Finally, we can also develop an algorithm that’s based on the Everettian interpretation of quantum mechanics. Because the memory of a classical computer, albeit decohered, is still a quantum system, we can then develop a quantum version of Bogosort:

• first, we randomize the array
• then, if the array isn’t in the correct order, we destroy the universe

If we’re in a lucky universe, where the sorted array exists, the algorithm runs only once. If we’re unlucky, its execution takes all the time in the world. In fact, the algorithm ends as time itself also ends.
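Lacking the hardware for universe destruction, we can only sketch the classical shadow of this algorithm. Here `sys.exit` stands in for destroying the unlucky branch, and the function name is our own:

```python
import random
import sys

def quantum_bogosort(arr):
    """One shuffle per universe: surviving observers see a sorted array."""
    arr = list(arr)
    random.shuffle(arr)  # a single quantum randomization
    if any(arr[i] > arr[i + 1] for i in range(len(arr) - 1)):
        # in an unlucky branch, we destroy the universe
        sys.exit("unsorted universe destroyed")
    return arr
```

From the perspective of any observer who remains, the algorithm appears to sort the array in a single pass, O(n) time: all the other branches are no longer around to complain.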

## 5. Conclusion

In this article, we studied sorting algorithms that are even worse than Bogosort.
