1. Introduction

In this tutorial, we’ll derive the complexity of binary search.

Binary search is an efficient algorithm for finding the position of an item in a sorted array.

So, in this problem, we have a sorted array a (such that a_1 \leq a_2 \leq \ldots \leq a_n) and a search key x. We want to determine whether x is in a and, if so, at what position.

The assumption is that the input array is sorted, which requires its elements to be comparable. So, there must be an ordering relation on the set of elements from which we draw x and the elements of a.

2. The Algorithm

Binary search compares x to the middle element a_{middle} of the array and continues in the left or right subarray depending on whether x < a_{middle} or x > a_{middle}. If a_{middle} = x, we output middle.

Here’s the pseudocode:

algorithm BinarySearch(a, x):
    // INPUT
    //     a = a sorted array, indexed from 1 to n, where a[i] <= a[i+1]
    //     x = the search key
    // OUTPUT
    //     the index of x in a if x is in a, or 'failure' otherwise

    left <- 1
    right <- n
    middle <- floor((left + right) / 2)

    while left <= right:
        if x < a[middle]:
            // All the elements a[middle], ..., a[right] are > x
            right <- middle - 1
        else if x > a[middle]:
            // All the elements a[left], ..., a[middle] are < x
            left <- middle + 1
        else:
            // x = a[middle]
            return middle
        middle <- floor((left + right) / 2)

    // x not in a
    return 'failure'

So, the search stops when there are no more elements to search, or we find the search key.
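The pseudocode translates almost directly into Python. Here's a sketch (note that, unlike the pseudocode's 1-based indexing, Python lists are 0-based, and we return None instead of 'failure'):

```python
def binary_search(a, x):
    """Return the index of x in the sorted list a, or None if x is absent."""
    left, right = 0, len(a) - 1
    while left <= right:
        middle = (left + right) // 2  # floor division, as in the pseudocode
        if x < a[middle]:
            right = middle - 1        # a[middle..right] are all > x
        elif x > a[middle]:
            left = middle + 1         # a[left..middle] are all < x
        else:
            return middle             # x = a[middle]
    return None                       # x is not in a
```

For example, binary_search([1, 3, 5, 7, 9], 7) returns 3, while binary_search([1, 3, 5, 7, 9], 4) returns None.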

3. Complexity

Let’s call the subarray a[left], a[left+1], \ldots, a[right] the active part of the array. Since middle = \lfloor \frac{left+right}{2} \rfloor, we halve the active part in each iteration:

(Figure: binary search halving the active part of the array)

In the worst case, x isn’t in the array, so the number of iterations is \lfloor \log_2 n \rfloor + 1:

  • \lfloor \log_2 n \rfloor iterations to shrink the active part to a single element with left = right
  • One additional iteration to empty the active part and break the loop

We do O(1) work in each iteration, comparing x to the middle element. So, the algorithm is logarithmic: \boldsymbol{O(\log n)}.
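We can check the iteration count empirically with a short script (a sketch; worst_case_iterations is our own helper, and the worst case it exercises is a key larger than every element, which forces the longest run):

```python
def worst_case_iterations(n):
    """Count loop iterations of binary search on a sorted array of n
    elements when the key is larger than every element (a worst-case miss)."""
    a = list(range(n))               # sorted array 0, 1, ..., n-1
    x = n                            # strictly greater than max(a)
    left, right = 0, n - 1
    count = 0
    while left <= right:
        count += 1
        middle = (left + right) // 2
        if x < a[middle]:
            right = middle - 1
        elif x > a[middle]:
            left = middle + 1
        else:
            return count
    return count                     # equals floor(log2(n)) + 1 for n >= 1
```

For instance, worst_case_iterations(1000) returns 10, matching \lfloor \log_2 1000 \rfloor + 1.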

3.1. Master Theorem

We can use the Master Theorem to find the complexity. Let T(n) be the number of comparisons we perform in the worst case if the input array has n elements.

Since we halve the active part and do at most two comparisons in each iteration, the recurrence is:

    \[T(n) = T(n/2) + 2\]

The theorem uses the generic form:

    \[T(n) = a T(n/b) + f(n)\]

and compares f(n) to n^{\log_b a}. In our case, a=1 and b=2, so n^{\log_b a}=n^{\log_2 1} = n^0 = 1, and f(n)=2 \in \Theta(1).

Therefore, it holds that:

    \[2 = f(n) \in \Theta(1) =  \Theta \left(n^{\log_2 1} \right)\]

From the Master Theorem, we get the following:

    \[T(n) \in \Theta(n^{\log_b a} \log n)=\Theta(n^0 \log n) = \Theta(\log n)\]
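As a sanity check, we can also unroll the recurrence directly. Assuming n is a power of two and T(1) \in \Theta(1), each halving adds two comparisons:

    \[T(n) = T(n/2) + 2 = T(n/4) + 4 = \ldots = T(1) + 2\log_2 n \in \Theta(\log n)\]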

3.2. Space Complexity

Binary search uses only a constant number of helper variables (left, middle, right, n). As a result, the iterative version (which we presented) has an O(1) auxiliary space complexity.

However, the recursive version would use O(\log n) stack memory, one call frame per halving of the active part.
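For illustration, here's a sketch of the recursive variant in Python (0-based indexing; the function name is ours):

```python
def binary_search_rec(a, x, left, right):
    """Recursive binary search: each call adds a stack frame, and the active
    part halves per call, so the recursion depth is O(log n)."""
    if left > right:
        return None                   # active part is empty: x not in a
    middle = (left + right) // 2
    if x < a[middle]:
        return binary_search_rec(a, x, left, middle - 1)
    if x > a[middle]:
        return binary_search_rec(a, x, middle + 1, right)
    return middle                     # x = a[middle]
```

The initial call is binary_search_rec(a, x, 0, len(a) - 1).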

Since we need O(n) memory for an array with n elements, the total space complexity is O(n).

4. Conclusion

In this article, we explained the complexity of binary search. The algorithm runs in O(\log n) time but works only for sorted arrays.
