
# Logarithmic Time Complexity

Last modified: December 9, 2020

## 1. Overview

In this tutorial, we’re going to dive into the use of logarithmic time complexity in computer science.

**More precisely, we’ll discuss what logarithms mean and how to use them when applied to the calculation of the time complexity of algorithms.**

## 2. Logarithms in Computer Science

### 2.1. What Is a Logarithm?

**Logarithms are the mathematical inverse of exponentials.**

Let’s look at an example of an exponential:

*2^3 = 8*

This can be read as: what number do we get when we multiply 2 by itself 3 times?

*2 × 2 × 2 = 8*

Now let’s look at its inverse operation, the logarithm:

*log2(8) = 3*

This can be imagined as: how many times do we divide the number 8 by 2 to get to 1?

*8 / 2 = 4, 4 / 2 = 2, 2 / 2 = 1* – three divisions

The same way we can have an exponential function with different bases (base 2 in our example), we can also have logarithms with other bases. A base-3 logarithm, for instance, can be imagined as how many times we can divide some number by 3 before reaching 1.
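To make this concrete, here’s a small Python sketch (not part of the original examples) that computes a logarithm exactly as described above – by counting repeated integer divisions until we reach 1:

```python
def log_by_division(n, base=2):
    """Count how many times n can be divided by base before reaching 1."""
    count = 0
    while n > 1:
        n //= base
        count += 1
    return count

print(log_by_division(8))      # 8 -> 4 -> 2 -> 1: three divisions, so 3
print(log_by_division(27, 3))  # 27 -> 9 -> 3 -> 1: three divisions, so 3
```

For powers of the base, this matches the mathematical logarithm exactly; for other inputs, it gives the floor of the logarithm.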

However, **for the purposes of this tutorial, we’ll stick with logarithms of base-2, as they are the most common.** Let’s dig into that claim next.

### 2.2. Why Base-2?

This brings us to the question: why base-2? **Really, it’s just that it comes up so frequently.**

For example, base-2 logarithms often come up when designing algorithms:

- When we need to repeatedly divide an array in half – an operation used, for instance, in some sorting algorithms, like Merge Sort, and searching algorithms, like Binary Search; in this scenario, the number of times we can divide an array of size *n* in half is *log2(n)*
- When doing bit operations – for instance, writing a number *n* in binary uses about *log2(n)* bits
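The second point can be checked directly in Python: the number of binary digits of a positive integer *n* is *floor(log2(n)) + 1*. This short sketch (our own illustration, not from the article) compares the two:

```python
import math

# For a few sizes, compare the number of binary digits
# with floor(log2(n)) + 1 -- they match for every positive integer.
for n in [8, 100, 1024]:
    print(n, bin(n), n.bit_length(), math.floor(math.log2(n)) + 1)
```

`int.bit_length()` is Python’s built-in way of asking “how many bits does this number need?”, which is exactly the logarithmic quantity described above.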

**Because of this, if we classify something as O(log n), we’ll typically mean O(log2 n).**

*Using Big-O notation, though, this simplification doesn’t cause a problem since all logarithms are asymptotically equivalent.*

## 3. Review of Time Complexity

**When analyzing the time complexity of an algorithm, the question we have to ask is: what’s the relationship between the number of operations and the size of the input as the input grows?**

Very commonly, we’ll use Big-O notation to compare the time complexity of different algorithms.

## 4. *O(log n)* Time Complexity

### 4.1. Binary Search Example

Let’s look at the use of logarithms in the calculation of the time complexity of algorithms. **Specifically, we’ll use the Binary Search algorithm and its logarithmic time complexity – O(log n)**.

Binary Search is an algorithm that is used to search for an element in an ordered set.

It works by first checking the value at the center of the set. If the element it’s looking for is lower than the value found, it repeats the process on the left half of the set. If it’s bigger, it checks the right half. This process is repeated until the element is found (or the remaining range is empty, meaning the element isn’t present).
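The procedure described above can be sketched in a few lines of Python (a minimal illustration; the sample values in the usage line are our own, not the array pictured in the article):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current range (rounds down)
        if arr[mid] == target:
            return mid
        elif target < arr[mid]:
            high = mid - 1           # keep searching the left half
        else:
            low = mid + 1            # keep searching the right half
    return -1

print(binary_search([2, 3, 5, 8, 13, 21, 34, 55, 89], 2))  # -> 0
```

Note that each pass through the loop discards half of the remaining range – that halving is where the logarithm comes from.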

Let’s see it in action. We start with the following sorted array, searching for “2”:

**We start by picking the middle of the array (if the size is even we can round down)**:

The number we’re searching for is lower than the value encountered, which is 21. **So, we repeat the process for the left half of the array**:

Again, the number we’re searching for is lower than the value encountered. **We continue repeating the process**:

**until we find the desired value**.

### 4.2. Calculating Binary Search Complexity

Let’s think about the relationship between the number of operations and the size of the input as it grows.

The algorithm divides the input in half on each iteration. So, it starts with a size of 12, then 5 (the middle value already checked isn’t counted), then 2.

Let’s work backward, just so we can visualize what would happen with larger arrays:

- The last iteration, when the input was of size 2, took 1 operation to find the result
- The second-to-last iteration, when the input was of size 5, took 2 operations
- The first iteration, on the original array of size 12, took 3 operations

Schematically, it’s something like this:

**In each case, we can see that the relation between the input size and the number of operations is a logarithm:**

*operations ≈ log2(n)*

**In conclusion, as the input n grows, the time complexity is O(log n).** This is a textbook case of logarithmic time complexity.

Though other logarithms also appear in time complexity analysis, *O(log n)* is, by far, the one we’ll see the most.

## 5. Conclusion

In this article, we learned what logarithms are and how they apply to the time complexity of algorithms, using Binary Search as a textbook example of *O(log n)*.