1. Overview

For any problem, there can be multiple solutions. However, the goal is to find a solution that takes the least time to execute and consumes the least memory.

In computer science, solutions are translated into programs. Therefore, choosing the best solution depends on how many resources a program needs to execute. The resources in question are the amount of memory used and the execution time, and both should be as small as possible.

Time and space complexity are two important indicators of an optimal algorithm.

In this tutorial, we’ll define time and space complexity. We’ll discuss different types of time and space complexity, followed by an example for each.

Finally, we’ll conclude this tutorial by highlighting the core difference between them.

2. What Is Time Complexity?

Time complexity is the computational complexity that describes the amount of time required to execute an algorithm. It accounts for the time taken by every statement of the algorithm, so it depends heavily on the size of the processed data. Additionally, it helps us define the effectiveness of an algorithm and evaluate its performance.

Because time complexity is an asymptotic function of the input size, it's expressed using Landau's mathematical symbols: \mathcal{O}, \Omega, and \Theta. Each symbol describes a different kind of bound.

\mathcal{O} notation gives an upper bound on the time needed, and we typically use it to describe the worst-case scenario. \Omega notation gives a lower bound on the time needed, corresponding to the best-case scenario. \Theta notation bounds the time from both above and below, giving a tight characterization of the running time.
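For example, when we search for a value in an unsorted array by scanning it from left to right, the best case (the value sits in the first cell) takes \Omega(1) time, while the worst case (the value is absent) takes \mathcal{O}(n) time.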

There are different types of time complexity, depending on how much time an algorithm spends until it finishes its execution. Therefore, the type of time complexity is determined by the instructions or statements in a program. A few examples are constant time (\mathcal{O}(1)), linear time (\mathcal{O}(n)), and logarithmic time (\mathcal{O}(\log n)).
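To make these classes concrete, here's a minimal Java sketch (the class and method names are purely illustrative): reading a fixed element takes constant time, summing all elements takes linear time, and binary search over a sorted array takes logarithmic time because it halves the search range at every step.

class ComplexityExamples {

    // Constant time, O(1): the running time doesn't depend on the array size.
    static int firstElement(int[] a) {
        return a[0];
    }

    // Linear time, O(n): every element is visited exactly once.
    static long sum(int[] a) {
        long total = 0;
        for (int value : a) {
            total += value;
        }
        return total;
    }

    // Logarithmic time, O(log n): the search range is halved on every iteration.
    static int binarySearch(int[] sorted, int key) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;
            if (sorted[mid] == key) {
                return mid;
            } else if (sorted[mid] < key) {
                low = mid + 1;
            } else {
                high = mid - 1;
            }
        }
        return -1;
    }
}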

3. Methods for Calculating Time Complexity

To calculate time complexity, we need to take each line of the program into consideration. As an example, let's calculate the time complexity of the factorial function:

1. fact ← 1 
2. i ← 2 
3. while i ≤ n do 
4.     fact ← fact ∗ i 
5.     i ← i + 1 
6. end while

Let T(n) be the time complexity of this algorithm. Lines 1 and 2 each run once, so their time complexity is \mathcal{O}(1). Line 3 denotes a loop, which repeats lines 4 and 5 (n - 1) times. Therefore, the time complexity of lines 4 and 5 is \mathcal{O}(n).

Finally, if we add up the time complexity of all the lines, we get the overall time complexity of the factorial function: \mathbf{T(n) = \mathcal{O}(1) + \mathcal{O}(1) + \mathcal{O}(n) = \mathcal{O}(n)}.
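For reference, here's a minimal Java version of the same algorithm, with comments marking which statements run a constant number of times and which run inside the loop (the long return type is just a precaution against overflow):

static long factorial(int n) {
    long fact = 1;                 // runs once: O(1)
    int i = 2;                     // runs once: O(1)
    while (i <= n) {               // loop condition is checked about n times
        fact = fact * i;           // executed (n - 1) times: O(n)
        i = i + 1;                 // executed (n - 1) times: O(n)
    }
    return fact;                   // total: T(n) = O(1) + O(n) = O(n)
}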

This method is called the iterative method because it calculates the time complexity of an iterative algorithm by going through it line by line and summing the complexities.

Besides the iterative method, several other approaches exist for different cases. For example, the recursive method is a good way to calculate the time complexity of recursive solutions, using recursion trees or the substitution method. Another popular method is the master theorem.
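For instance, merge sort splits its input into two halves and merges them in linear time, so its running time satisfies the recurrence T(n) = 2T(n/2) + \mathcal{O}(n). Applying the master theorem to this recurrence gives T(n) = \mathcal{O}(n \log n).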

4. What Is Space Complexity?

When an algorithm is executed on a computer, it necessarily requires a specific amount of memory space. Space complexity represents the amount of memory a program uses in order to complete its execution. Because a program needs memory to store its input data and temporary values while it runs, space complexity covers both the input space and the auxiliary space.

Just like time complexity, it also helps us evaluate a solution. It's represented using the same Landau notations that we presented above for time complexity.

Similar to time complexity, there are different types of space complexity, depending on the memory consumed by each algorithm. An example of an algorithm with constant space complexity is selection sort, since it sorts the array in place and doesn't need any additional memory that grows with the input.

Merge sort is an example of an algorithm with linear space complexity: it needs to create auxiliary arrays holding parts of the original array while merging. Therefore, the bigger the input array is, the more memory space it needs.
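To illustrate the difference, here's a minimal Java sketch of selection sort (the method name is illustrative): apart from a few loop counters and a temporary variable used for swapping, it allocates no memory that grows with the input, so its auxiliary space is constant:

static void selectionSort(int[] a) {
    // Only a handful of int variables are used, regardless of a.length: O(1) auxiliary space.
    for (int i = 0; i < a.length - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < a.length; j++) {
            if (a[j] < a[minIndex]) {
                minIndex = j;
            }
        }
        int tmp = a[i];              // temporary used for the in-place swap
        a[i] = a[minIndex];
        a[minIndex] = tmp;
    }
}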

5. Methods for Calculating Space Complexity

In this section, we'll discuss how to calculate space complexity with an example. Here, we take the example of computing the sum of the elements of an array:

1. int sum ← 0, i ← 1 
2. while i ≤ n do 
3.     sum ← sum + a[i] 
4.     i ← i + 1 
5. end while 
6. return sum

Let S(n) be the space complexity of the algorithm. In most systems, an integer occupies 4 bytes of memory. Therefore, we'll measure the space complexity as the number of allocated bytes.

In line 1, memory space is allocated for two integers, which means S(n) = 4 \ bytes \times 2 = 8 bytes. Line 2 denotes a loop. In lines 3 and 4, we only assign values to existing variables, so there's no need to allocate extra space. In line 6, the return statement allocates one more memory cell. Hence, S(n) = 4 \times 2 + 4 = 12 bytes.

Since the input array holds n integers, it occupies 4n additional bytes. Therefore, the final space complexity will be \mathbf{S(n) = 4n + 12 = \mathcal{O}(n)}.
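For reference, here's a minimal Java version of the same algorithm (the method name arraySum is illustrative): apart from the input array itself, it only allocates a couple of int variables, so the total space grows linearly with the array while the auxiliary space stays constant:

static int arraySum(int[] a) {
    int sum = 0;                  // 4 bytes
    int i = 0;                    // 4 bytes
    while (i < a.length) {        // the input array a itself occupies 4 * n bytes
        sum = sum + a[i];         // assignments reuse existing variables: no extra space
        i = i + 1;
    }
    return sum;                   // 4 more bytes for the returned value
}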

6. Time Complexity vs. Space Complexity

Now that we know the basics of time and space complexity and how to calculate them for an algorithm or program, let's summarize the previous discussions and enumerate the core differences in a table:

Time Complexity | Space Complexity
Measures the time an algorithm needs to complete its execution | Measures the memory an algorithm consumes during its execution
Depends on the size of the input data and on how many times each statement is executed | Depends on the input data and on the auxiliary variables and data structures the algorithm allocates
Calculated with the iterative method, the recursive method, or the master theorem | Calculated by adding up the memory allocated for the input and the auxiliary variables
Expressed using Landau's notations \mathcal{O}, \Omega, and \Theta | Expressed using the same Landau notations

7. Conclusion

Undoubtedly, both time and space complexity are important parameters for evaluating a solution. Nevertheless, with the evolution of hardware, space complexity is often less of a constraint nowadays because most machines have plenty of memory. However, time complexity remains a crucial way to evaluate algorithms.

In this tutorial, we discussed the theory behind time and space complexity. Moreover, we demonstrated the steps of how to compute time and space complexity with an example for each case.

Finally, we enumerated the main differences between these two concepts in a table.
