In the divide and conquer strategy, we solve a problem recursively by applying three steps at each level of the recursion: divide, conquer, and combine.
In this tutorial, we’re going to explore them in detail.
2. Steps for Divide and Conquer Algorithms
2.1. Divide

“Divide” is the first step of the divide and conquer strategy. As the name suggests, in this step we divide the problem into smaller subproblems until each subproblem is small enough to be solved directly. At this point, the subproblems are smaller but still represent some part of the actual problem.

As mentioned above, we use recursion to implement a divide and conquer algorithm. A recursive algorithm calls itself with smaller or simpler input values; we call that the recursive case. So, when we implement the divide step, we specify the recursive case that divides the problem into smaller subproblems.
2.2. Conquer

Next comes the “conquer” step, where we solve the subproblems directly. By now, we have divided the input into the smallest possible parts, and we solve them by performing basic operations.

We implement the conquer step in recursion by specifying the base case. Once the subproblems become small enough that we no longer recurse, we say the recursion “bottoms out”: we've reached the base case, and we solve the subproblem directly.
2.3. Combine

Finally, we arrive at the last step of the divide and conquer strategy – “combine”.

In this step, we combine the solutions of the subproblems to solve the whole problem. The output returned from solving a base case becomes the input to a larger subproblem.

So, after reaching the base case, we work our way back up, solving larger subproblems with the outputs returned from smaller ones. We keep merging the results of the conquer step and propagate from the bottom up until we've solved the whole original problem.
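The three steps above can be sketched with a small recursive template. As a minimal, illustrative example (the function name `dc_max` is our own), here is a divide and conquer computation of the maximum of a list:

```python
def dc_max(values, lo, hi):
    """Return the maximum of values[lo:hi] with divide and conquer."""
    # Base case ("bottoms out"): a single element is its own maximum.
    if hi - lo == 1:
        return values[lo]
    # Divide: split the index range into two halves.
    mid = (lo + hi) // 2
    # Conquer: solve each half recursively.
    left = dc_max(values, lo, mid)
    right = dc_max(values, mid, hi)
    # Combine: the larger partial answer solves the whole problem.
    return left if left >= right else right

print(dc_max([3, 1, 4, 1, 5], 0, 5))  # → 5
```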
2.4. Entire Flow
In order to visualize the strategy of divide and conquer, let’s have a look at the flowchart of the entire procedure:
As we can see, we first divide the problem into subproblems, then conquer each subproblem, and finally combine the partial solutions into a full solution.
3. The Complexity of Divide and Conquer Algorithms
When an algorithm contains a recursive call to itself, we can often describe its running time by a recurrence equation or recurrence, which describes the overall running time on a problem of size n in terms of the running time on smaller inputs.
A recurrence for the running time of a divide and conquer algorithm falls out from the three steps of the basic paradigm.
As always, we let T(n) be the running time on a problem of size n. If the problem size is small enough, say n ≤ c for some constant c, the straightforward solution takes constant time, which we write as Θ(1).

Suppose that our division of the problem yields a subproblems, each of which is 1/b the size of the original. It takes T(n/b) time to solve one subproblem of size n/b, and so it takes aT(n/b) time to solve a of them.

If we take D(n) time to divide the problem into subproblems and C(n) time to combine the solutions to the subproblems into the solution to the original problem, we get the recurrence:

T(n) = Θ(1) if n ≤ c, and T(n) = aT(n/b) + D(n) + C(n) otherwise.
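To make the recurrence concrete, we can instantiate it for merge sort (a standard textbook result): dividing takes constant time, the division yields a = 2 subproblems of size n/2, and the combine (merge) step takes linear time, so:

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + \Theta(n)
\quad\Longrightarrow\quad
T(n) = \Theta(n \log n)
```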
4. Advantages of Divide and Conquer Algorithms
The first, and probably the most recognizable, benefit of the divide and conquer paradigm is that it allows us to tackle difficult, sometimes even NP-hard, problems.
Facing a difficult problem can be discouraging when we have no idea how to approach it. The divide and conquer method reduces the degree of difficulty, since it splits the problem into easily solvable subproblems.
Another advantage of this paradigm is that it often helps in discovering other efficient algorithms. In fact, it played a central role in the discovery of the quicksort and merge sort algorithms.
It also uses memory caches effectively. The reason for this is the fact that when the subproblems become simple enough, they can be solved within a cache, without having to access the slower main memory, which saves time and makes the algorithm more efficient.
And in some cases, it can even produce more precise outcomes in computations with rounded arithmetic than iterative methods would.
In the divide and conquer strategy, we divide a problem into subproblems that can be solved independently of each other, which makes the strategy well suited for parallel execution.
5. Disadvantages of Divide and Conquer Algorithms
One of the most common issues with this sort of algorithm is the overhead of recursion – the extra function calls and stack usage – which in some cases outweighs any advantages of the divide and conquer process.
Another concern is that a divide and conquer solution can sometimes become more complicated than a basic iterative approach, especially for problems that are simple to begin with.

For example, to add a list of numbers, writing a simple loop is a much simpler approach than dividing the numbers into two groups, adding each group recursively, and then adding the two partial sums together.
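A minimal sketch of that comparison (both functions are illustrative, not a prescribed implementation):

```python
def iterative_sum(values):
    # The straightforward approach: one simple loop.
    total = 0
    for v in values:
        total += v
    return total

def recursive_sum(values):
    # The divide and conquer approach: more machinery for the same result.
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return recursive_sum(values[:mid]) + recursive_sum(values[mid:])

print(iterative_sum([2, 4, 6, 8]))  # → 20
print(recursive_sum([2, 4, 6, 8]))  # → 20
```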
6. Practical Examples
6.1. The Binary Search Algorithm
Binary search is a divide and conquer algorithm. Like all divide and conquer algorithms, it divides the array into two smaller subarrays. Next, it discards one of the subarrays and continues the search in the other.

We compare the search key with the element in the middle of the array, and this comparison decides which subarray to discard. If the value of the search key is less than the item in the middle of the array, we narrow the interval to the lower subarray. Otherwise, we narrow it to the upper subarray.

To use this algorithm, the array must be sorted.
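A recursive sketch along these lines (illustrative; it assumes the array is sorted in ascending order and returns -1 when the key is absent):

```python
def binary_search(arr, key, lo=0, hi=None):
    """Return the index of key in the sorted list arr, or -1 if absent."""
    if hi is None:
        hi = len(arr) - 1
    # Base case: empty interval, the key is not in the array.
    if lo > hi:
        return -1
    # Compare the key with the middle element.
    mid = (lo + hi) // 2
    if arr[mid] == key:
        return mid
    if key < arr[mid]:
        # Discard the upper subarray; continue in the lower one.
        return binary_search(arr, key, lo, mid - 1)
    # Discard the lower subarray; continue in the upper one.
    return binary_search(arr, key, mid + 1, hi)

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```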
6.2. The Merge Sort Algorithm
The merge sort algorithm closely follows the divide and conquer paradigm. In the merge sort algorithm, we divide the n-element sequence to be sorted into two subsequences of n/2 elements each. Next, we sort the two subsequences recursively using merge sort. Finally, we combine the two sorted subsequences to produce the sorted answer.
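These steps can be sketched as follows (an illustrative version that returns a new sorted list rather than sorting in place):

```python
def merge_sort(seq):
    # Base case: zero or one elements are already sorted.
    if len(seq) <= 1:
        return list(seq)
    # Divide: split into two subsequences of about n/2 elements each.
    mid = len(seq) // 2
    # Conquer: sort each subsequence recursively.
    left = merge_sort(seq[:mid])
    right = merge_sort(seq[mid:])
    # Combine: merge the two sorted subsequences into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```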
7. Conclusion

In this article, we explored the divide and conquer strategy and how it is implemented with recursion.