
# Understanding Space Complexity

Last modified: August 2, 2021

## 1. Introduction

**Space complexity measures the total amount of memory that an algorithm or operation needs to run according to its input size.**

In this tutorial, we’ll see different ways to quantify space complexity. Moreover, we’ll analyze the total space taken via some examples.

Lastly, we’ll discuss how space and time complexity impact each other.

## 2. Notations

There are different notations we can use to express space complexity measurements. The most widely used is big-O notation, and that’ll be our main focus. Also, we’ll briefly define other common notations.

### 2.1. Big-O Notation

Big-O notation describes an asymptotic upper bound. It represents the algorithm’s scalability and performance.

Simply put, **it gives the worst-case scenario of an algorithm’s growth rate.** We can say that: “the amount of space this algorithm takes will grow no more quickly than this *f(x)*, but it could grow more slowly.”

Let’s see a few examples of expressing space complexity using big-O notation, starting from slowest space growth (best) to fastest (worst):

- *O(1)* – constant complexity – takes the same amount of space regardless of the input size
- *O(log n)* – logarithmic complexity – takes space proportional to the *log* of the input size
- *O(n)* – linear complexity – takes space directly proportional to the input size
- *O(n log n)* – log-linear/quasilinear complexity – also called “linearithmic”, its space complexity grows proportionally to the input size and a logarithmic factor
- *O(n^2)* – square/polynomial complexity – space complexity grows proportionally to the square of the input size
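
To make the first and third classes concrete, here is a minimal sketch (the class and method names are our own, chosen for illustration) contrasting constant and linear space in Java:

```java
public class SpaceDemo {

    // O(1) space: only a fixed number of variables, no matter how large n is
    public static int sumUpTo(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    // O(n) space: allocates an array whose size grows with the input
    public static int[] firstSquares(int n) {
        int[] squares = new int[n];
        for (int i = 0; i < n; i++) {
            squares[i] = i * i;
        }
        return squares;
    }
}
```

The first method uses the same two local variables whether *n* is 10 or 10 million; the second must allocate one array slot per input unit.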

### 2.2. Omega Notation – Ω

Omega notation expresses an asymptotic lower bound.

So, **it gives the best-case scenario of an algorithm’s complexity**, opposite to big-O notation. We can say that: “the amount of space this algorithm takes will grow no more slowly than this *f(x)*, but it could grow more quickly.”

Let’s analyze a simple example to illustrate why we prefer big-O notation over omega notation. Using Ω, we could say that the richest person on Earth owns at least $10. It is a true sentence but obviously far from being precise.

### 2.3. Theta Notation – Θ

Theta notation represents a function that is within lower and upper bounds.

We can say that: “the algorithm’s space takes at least that (lower bound function) amount of space and no more than that (maximum bound function) amount of space”.

## 3. Analyzing Space Complexity of Algorithms

The ability to calculate space complexity is essential in considering an algorithm’s efficiency. In this section, we’ll analyze the space complexity of a few programs of differing difficulty. We’ll measure it using big-O notation.

Above all, it’s necessary to mention that **space complexity depends on a variety of things such as the programming language, the compiler, or even the machine running the algorithm**.

We’ll use Java for our examples, although each of them could easily be applied to any other technology. To apply them to other programming languages, we can simply replace the Java data types with language-specific ones. Then, in our calculations, we’d use the sizes that correspond to the language-specific data types.

For example, to port the below Java example to JavaScript, we’d replace Java’s *int* with JavaScript’s *number*. And, since Java’s *int* occupies four bytes and JavaScript’s *number* occupies eight bytes, we’d adjust the data sizes used in our analysis accordingly.

To get warmed up, let’s consider a simple Java operation that sums two integers (numbers without a fractional part):

```
public int sum(int a, int b) {
    return a + b;
}
```

In this particular method, three variables are used and allocated in memory:

- the first *int* argument, *a*
- the second *int* argument, *b*
- the returned sum, which is also an *int* like *a* and *b*

In Java, a single integer variable occupies four bytes of memory. In this example, we have three integer variables. Therefore, this algorithm always takes 12 bytes of memory to complete (3 × 4 bytes).

We can clearly see that the space complexity is constant, so it can be expressed in big-O notation as *O(1)*.

Next, let’s determine the space complexity of a program that sums all integer elements in an array:

```
public int sumArray(int[] array) {
    int size = array.length;
    int sum = 0;
    for (int iterator = 0; iterator < size; iterator++) {
        sum += array[iterator];
    }
    return sum;
}
```

Again, let’s list all variables present in the above code:

- *array* – the function’s only argument – the space taken by the array is equal to 4*n* bytes, where *n* is the length of the array
- *size* – a 4-byte integer
- *sum* – a 4-byte integer
- *iterator* – a 4-byte integer

The total space needed for this algorithm to complete is 4*n* + 4 + 4 + 4 (bytes). The highest-order term in this expression is 4*n*; dropping constant factors and lower-order terms, the space complexity of this operation is *O(n)*.
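
By the same accounting, an operation that builds a new array grows linearly in space, while one that modifies its input in place stays constant. A small sketch of our own (not taken from the text above) makes the contrast visible:

```java
public class ReverseDemo {

    // O(n) extra space: allocates a brand-new array of the same length
    public static int[] reversedCopy(int[] array) {
        int[] result = new int[array.length];
        for (int i = 0; i < array.length; i++) {
            result[i] = array[array.length - 1 - i];
        }
        return result;
    }

    // O(1) extra space: swaps elements inside the original array,
    // using only two index variables and one temporary slot
    public static void reverseInPlace(int[] array) {
        for (int i = 0, j = array.length - 1; i < j; i++, j--) {
            int tmp = array[i];
            array[i] = array[j];
            array[j] = tmp;
        }
    }
}
```

Both methods do the same amount of work in time, but their memory footprints differ by a full 4*n*-byte array.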

## 4. Space-Time Complexity Tradeoffs

Analyzing time and space complexity is ultimately about assessing an algorithm’s efficiency.

But when can we say that an algorithm is efficient? The answer seems obvious: it should be fast, and it should take the least amount of memory possible.

Unfortunately, in algorithmics, space and time are like two separate poles. **Increasing speed will most often lead to increased memory consumption and vice-versa**.

On one side, we have merge sort, which is extremely fast but requires extra memory proportional to the input size. On the other, we have bubble sort, a slow algorithm but one that occupies minimal space. There are also some balanced ones, like in-place heap sort. Its speed and space usage are not the best, but they’re acceptable.

### 4.1. Practical Example

A great example of optimizing the time complexity of an algorithm at the expense of memory is memoization. It’s a technique used to reduce the time complexity of algorithms that frequently call some method with the same input data. Instead of recomputing the result every time, which might be time-consuming, we store the results, and on each call, we check if there is already a cached result for a given input.

Let’s see that in action. Below, we can see an algorithm that calculates a Fibonacci number:

```
public int fibo(int n) {
    System.out.println("Calling fibonacci with input: " + n);
    if (n < 2) {
        return n;
    }
    return fibo(n - 1) + fibo(n - 2);
}
```

It’s a recursive solution, and its time complexity is *O(2^n)*. After running it with input *n* = 6, we can see the console output (the calls with inputs 0 and 1 are omitted for brevity):

```
Calling fibonacci with input: 6
Calling fibonacci with input: 5
Calling fibonacci with input: 4
Calling fibonacci with input: 3
Calling fibonacci with input: 2
Calling fibonacci with input: 2
Calling fibonacci with input: 3
Calling fibonacci with input: 2
Calling fibonacci with input: 4
Calling fibonacci with input: 3
Calling fibonacci with input: 2
Calling fibonacci with input: 2
```

As we can see, the method is called with the same input a few times. Let’s optimize it using memoization:

```
private Map<Integer, Integer> cacheTable = new HashMap<>();

public int fibo(int n) {
    if (n < 2) {
        return n;
    }
    if (this.cacheTable.containsKey(n)) {
        return this.cacheTable.get(n);
    }
    int fiboNumber = fibo(n - 1) + fibo(n - 2);
    this.cacheTable.put(n, fiboNumber);
    return fiboNumber;
}
```

In the above algorithm, we compute the result only once for each input number. Thus, we reduced the time complexity to *O(n)*. At the same time, we increased the memory used by the program with the addition of a *HashMap* that stores previously calculated pairs of input *n* and the corresponding result.
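
To see the reduction concretely, we can wrap the memoized method in a class and count how many times the recursive sum actually executes (the class name and the `computations` counter are our own additions, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoFibo {

    private final Map<Integer, Integer> cacheTable = new HashMap<>();

    // counts how many times the recursive sum is actually computed
    int computations = 0;

    public int fibo(int n) {
        if (n < 2) {
            return n;
        }
        if (cacheTable.containsKey(n)) {
            return cacheTable.get(n);
        }
        computations++;
        int fiboNumber = fibo(n - 1) + fibo(n - 2);
        cacheTable.put(n, fiboNumber);
        return fiboNumber;
    }
}
```

For *n* = 10, the sum runs only nine times (once for each value from 2 to 10), instead of the exponential number of calls the naive version makes.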

### 4.2. Which One to Choose?

Considering an algorithm’s efficiency, we should determine what environment it will run in and what the requirements are.

An algorithm’s slowness alone can’t cause a program to fail. Its speed depends on the computing power of the machine on which it’s executed; in the worst case, low computing power will merely force the algorithm to run slowly.

On the other hand, if an algorithm requires more memory than the machine provides, it won’t complete. Most often, it will cause some type of memory-related error.

To sum up, we can’t aim for the best time and space efficiency at once. Most often, we’ll sacrifice one of them according to our requirements. At times, we can find a happy medium.

## 5. Conclusion

In this article, we defined what space complexity means. Moreover, we described common notations used to express it. Then, we determined the space complexity of some given algorithms written in Java.

Finally, we discussed the space-time complexity tradeoffs. Now we know that **optimizing both an algorithm’s space and time complexity at once is usually impossible**. We should adjust those parameters according to our requirements and environment.