
1. Introduction

In this tutorial, we’ll talk about the difference between pseudo-polynomial and polynomial complexities.

2. Complexity

The goal of time-complexity analysis is to derive asymptotic bound(s) of the number of steps an algorithm executes as its input grows in size. For instance, the O(n^2) time complexity means that if the input’s size n grows indefinitely, the algorithm will perform at most cn^2 steps, where c is a positive constant. Translating steps to time units, we’ll get that the duration is also a quadratic function of n.

That, however, brings up a question of the input size. What is an appropriate measure of the size of an input?

2.1. The Size Is the Number of Bits

In algorithm analysis, the input size is the number of bits needed to write the input down. For example, if the input is an array with n integers, the actual size for analysis is 32n or 64n, depending on how we store integer numbers at the machine level. Likewise, the size of an n \times m matrix is 32nm or 64nm.
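As a quick illustration, here's a minimal Python sketch of this accounting (the 64-bit fixed-width representation is an assumption, not something the machine enforces for Python's arbitrary-precision integers):

```python
def input_size_in_bits(array, bits_per_element=64):
    """Total bits needed to store the array, assuming each element
    occupies a fixed-width cell of bits_per_element bits."""
    return len(array) * bits_per_element

# A 10-element integer array occupies 64 * 10 = 640 bits.
print(input_size_in_bits(list(range(10))))  # 640
```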

2.2. We Can Count the Elements and Get Correct Results

In general, the following rule applies:

(1)   \begin{equation*} \text{the input size } =\sum_{\text{element }\in\text{ input} } \text{ the number of bits the element takes} \end{equation*}

If all the elements are of the same type, which is mostly the case, we have:

(2)   \begin{equation*} \text{the input size } =\text{the number of elements} \times \text{ the number of bits per element} \end{equation*}

As long as the number of bits per element is a constant that doesn't depend on the number of input elements, we can ignore it in the analysis. Why? Because multiplicative constants don't change the complexity.

Therefore, it's safe to use n as the size of an n-element array, which we do in practice. The same goes for other input types, such as matrices.

3. The Pseudo-Polynomial Complexity

What if we express complexity in terms of the input’s magnitude instead of the number of bits?

Let's say that instead of n, we use m = \max \{a_1, a_2, \ldots, a_n\}, where a_1, a_2, \ldots, a_n constitute the input to our algorithm. The technical details of the analysis won't change. The same tools, such as the Master and Akra-Bazzi theorems, still apply. However, we may get a different result than if we used the standard definition of the input size. For that reason, we say that an algorithm whose complexity is \boldsymbol{O(m^p)} in the input's magnitude m is pseudo-polynomial. Let's take a look at an example.

3.1. A Pseudo-Polynomial Primality Problem

Here’s the pseudocode of a simple way to check if a number is prime:

algorithm IsPrime(m):
    // INPUT: an integer m >= 2
    // OUTPUT: true if m is prime, false otherwise
    for i <- 2 to m - 1:
        if m mod i = 0:
            // i divides m, so m isn't prime
            return false
    return true

3.2. Analysis

The algorithm executes at most m iterations. To simplify things, we’ll assume that our machine has an O(1) low-level method of computing x \bmod y. Then, we can conclude that the algorithm’s time complexity is O(m).
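A runnable version of the trial-division check might look like this in Python (a sketch that mirrors the pseudocode's loop bound rather than optimizing it, e.g., by stopping at \sqrt{m}):

```python
def is_prime(m):
    """Trial division: try every candidate divisor from 2 to m - 1.
    Performs at most m - 2 modulo operations, so the running time
    is O(m) in the input's magnitude."""
    if m < 2:
        return False
    for i in range(2, m):
        if m % i == 0:
            return False
    return True

print(is_prime(97))  # True
print(is_prime(91))  # False (91 = 7 * 13)
```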

However, we need only about n=\log_2 m bits to store the input number m in memory. Since m=2^n, the complexity in terms of the size is O\left(2^n\right). So, is the algorithm linear or exponential?
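We can make the relationship between the bit count n and the magnitude m concrete with a small Python check (`int.bit_length` returns the number of bits needed to write a number in binary):

```python
m = 1_000_003
n = m.bit_length()  # bits needed to write m in binary

print(n)          # 20: the input occupies only 20 bits ...
print(m)          # ... yet drives up to about a million loop iterations
print(2 ** n - 1) # the largest magnitude a 20-bit input can have
```

So an input that grows by a single bit can roughly double the number of iterations: linear in m, exponential in n.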

Since computers store data as binary strings (consisting of zeros and ones), the latter result is more in line with how they carry out the computation. On the other hand, if we had computers that could store any number in a single memory cell, expressing the complexity in terms of the numeric values would be more appropriate. So, it all comes down to what is a more natural and more reasonable representation of the input.

4. Why Do We Care About Pseudo-Polynomial Complexity?

Since \log_2 m grows slowly with m, the exponential nature of numerical algorithms materializes only if the input number m is sufficiently large. So, if we can bound its value or be certain that it will be reasonably low most of the time, we shouldn't worry too much about the exponential worst case.

The reverse is also useful. If we want a numerical computation to be infeasible, the pseudo-polynomial complexity tells us to use exceptionally large numbers. That's the case in cryptography, for example. We couldn't rely on RSA for encryption if we could factor numbers in polynomial time. So, users of RSA choose prime factors p and q such that factoring their product is beyond the current computational capabilities of any attacker.
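To see why key size matters, here's a minimal sketch of factoring by trial division (a hypothetical helper, not how real attacks work): it takes O(\sqrt{m}) arithmetic steps in the magnitude of m, which is still O(2^{n/2}) in the number of bits n, so every extra bit in the key multiplies the work.

```python
def smallest_prime_factor(m):
    """Return the smallest prime factor of m >= 2 via trial division.
    Runs in O(sqrt(m)) steps in the magnitude of m, i.e. O(2**(n/2))
    in the number of bits n -- pseudo-polynomial, not polynomial."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # no divisor found, so m itself is prime

print(smallest_prime_factor(3 * 5 * 7))  # 3
print(smallest_prime_factor(101))        # 101 (prime)
```

For a toy semiprime this finishes instantly, but for a 2048-bit RSA modulus the same loop would need on the order of 2^{1024} iterations.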

5. Conclusion

In this article, we talked about pseudo-polynomial algorithms. They are polynomial in the input’s magnitude. However, as we illustrated with an example, such algorithms aren’t polynomial in the number of bits we need to store their inputs.
