1. Introduction

In this tutorial, we’ll present the Akra-Bazzi method. It’s a theorem we can use to determine the asymptotic complexity of divide-and-conquer algorithms.

2. The Complexity of Divide-and-Conquer Algorithms

Divide-and-conquer is an algorithm design technique that solves a problem by breaking it down into smaller sub-problems and combining their solutions. Each sub-problem is treated the same way: we divide it into smaller parts, solve them recursively, and unify the sub-solutions.

2.1. The Master Theorem

If the sub-problems are of the same size, we use the Master Theorem. It covers the cases where T(n), the number of steps an algorithm makes while solving a problem of size n, satisfies a recurrence of the form:

    \[T(n) = aT\left(\frac{n}{b}\right) + g(n)\]

where g(n) is a real-valued function. More specifically, the recurrence describes the complexity of an algorithm that divides a problem of size n into a sub-problems each of size n/b. It performs g(n) steps to break down the problem and combine the solutions to its sub-problems.
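
For instance, here’s a minimal Python sketch of Merge Sort (our own illustration, not part of the theorem) with the recurrence it induces marked in the comments: it creates a=2 sub-problems of size \frac{n}{2} each (so b=2), and splitting plus merging take g(n) \in \Theta(n) steps, giving T(n) = 2T\left(\frac{n}{2}\right) + \Theta(n):

    def merge_sort(xs):
        """Sorts a list; induces the recurrence T(n) = 2T(n/2) + Theta(n)."""
        if len(xs) <= 1:              # base case: T(1) is Theta(1)
            return xs
        mid = len(xs) // 2
        left = merge_sort(xs[:mid])   # first sub-problem: T(n/2)
        right = merge_sort(xs[mid:])  # second sub-problem: T(n/2)
        merged, i, j = [], 0, 0       # merging is the g(n) = Theta(n) part
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]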

3. The Akra-Bazzi Method

We can determine many algorithms’ complexities using the Master Theorem. For example, we can use it to show that Merge Sort has log-linear worst-case complexity. Unfortunately, the Master Theorem isn’t always applicable.

Let’s say that the recurrence is:

    \[T(n) = T\left(\frac{n}{2}\right) + T\left(\frac{n}{4}\right) + 2n\]

That is, we perform 2n steps to divide a problem of size n into sub-problems of sizes \frac{n}{2} and \frac{n}{4} and combine their solutions. Since the sub-problems are uneven, we can’t use the Master Theorem.

3.1. The Akra-Bazzi Theorem

Instead, we use the more general Akra-Bazzi Theorem. It’s applicable to recurrences of the form:

(1)   \begin{equation*} T(n) = \begin{cases} \Theta(1),& n\leq n_0 \\ g(n) + \sum_{i=1}^{k}a_i T\left( b_i n + h_i(n) \right),& n > n_0 \end{cases} \end{equation*}

where T : R \rightarrow R^{+} is bounded, and the following holds:

  1. n_0 \in R is large enough so that T is well-defined
  2. For each i=1,2,\ldots,k:
    1. the constant a_i >0
    2. the constant b_i \in (0, 1)
    3. |h_i(n)| \in O\left(\frac{n}{\log^2 n}\right)
  3. k \geq 1 is a constant
  4. g(n) is a non-negative polynomial-growth function

We say that g(n) satisfies the polynomial-growth condition if there exist c_1,c_2 >0 such that for all n \geq 1, u \in [b_i n, n], and i=1,2,\ldots,k we have c_1 g(n) \leq g(u) \leq c_2 g(n). This condition ensures that \boldsymbol{g} doesn’t change too fast. It holds if \boldsymbol{|g'(n)|} is bounded by a polynomial.
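
For example, g(n) = 2n from the recurrence above satisfies the condition: for any u \in [b_i n, n], we have

    \[2b_i n \leq g(u) = 2u \leq 2n\]

so c_1 = \min_i b_i and c_2 = 1 witness the inequalities c_1 g(n) \leq g(u) \leq c_2 g(n).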

If conditions 1-4 are satisfied and p is the unique real solution to the equation:

(2)   \begin{equation*}  \sum_{i=1}^{k}a_ib_i^p = 1 \end{equation*}

then the Akra-Bazzi Theorem guarantees that:

(3)   \begin{equation*}  T(n) \in \Theta\left(n^p \left(1 + \int_{1}^{n}\frac{g(u)}{u^{p+1}}du \right) \right) \end{equation*}

We should note that the theorem requires an additional technical condition: the integral in Equation (3) must be finite:

    \[\int_{1}^{n}\frac{g(u)}{u^{p+1}}du < \infty\]

3.2. The Akra-Bazzi Theorem in the Context of Analyzing Algorithms

The theorem holds for real numbers and general recurrences. But, in complexity analysis, we’ll deal with recurrences of a particular kind.

First of all, \boldsymbol{T} will apply to positive integers only, so n \in \{1,2,\ldots\} in our applications. That’s because the input size n is an integer.

The value of g(n) represents the number of steps the algorithm performs to divide an input of size n into smaller sub-problems and to combine their solutions after solving them recursively, plus possibly some extra work on processing the input. Even just receiving the input counts as at least one step, so \boldsymbol{g} will be strictly positive. For the same reason, \boldsymbol{T} will always be positive.

3.3. Floor and Ceiling

Since n is an integer in the analysis of algorithms, we should really use \lfloor b_i n \rfloor or \lceil b_i n \rceil instead of b_i n. Luckily, we can ignore the floor and ceiling functions because they don’t affect the asymptotic growth: that’s exactly what the h_i functions are for. To handle a_i T\left(\lceil b_i n \rceil\right), we set h_i(n)=\lceil b_i n \rceil - b_i n and rewrite the term as:

    \[a_i T\left(b_i n + \lceil b_i n \rceil - b_i n\right)\]

Since |h_i(n)| < 1, condition 3 is satisfied automatically. That’s why we’ll usually formulate our recurrences without the h_is in practice. We’ll still get the correct results but work with a simpler formula:

    \[T(n) = g(n) + \sum_{i=1}^{k}a_i T\left( b_i n \right) \quad (n>n_0)\]

3.4. Boundary Conditions

If we take a closer look at the theorem’s result, we’ll see that n_0 doesn’t appear there. That may seem somewhat counter-intuitive at first. After all, if we double all the values in the base cases, we might expect the final value of T to change noticeably for every n. However, scaling the base cases changes T(n) by at most a constant factor, and multiplicative constants don’t affect the asymptotic behavior. In turn, this implies that the asymptotic behavior of an algorithm is independent of its base cases.

We also say that n_0 should be large enough so that T is well defined. To see why, let’s consider T(n) = T\left(\lceil \frac{4}{5} n \rceil\right) + 1. If n_0=3, then we solve T(4) recursively:

    \[T(4) = T\left(\left\lceil \frac{4}{5}\cdot 4 \right\rceil \right) + 1 = T(\lceil 3.2 \rceil) + 1 = T(4) + 1\]

which is a contradiction: no value of T(4) can satisfy it, so T isn’t well-defined. So, \boldsymbol{n_0} must be large enough to accommodate all such cases. Here, any n_0 \geq 4 works, since \lceil \frac{4}{5} n \rceil < n for all n \geq 5.

3.5. On Calculating \boldsymbol{p}

Finally, p from Equation (2) may not be easy to calculate precisely and by hand. If that’s the case, we can approximate its value numerically. For example, we can use Newton’s method.

However we compute or approximate p, it always exists and is unique. The conditions on a_i and b_i ensure that the function p \mapsto \sum_{i=1}^{k}a_i b_i^p is continuous and strictly decreasing, with range (0, \infty). By the intermediate value theorem, \sum_{i=1}^{k}a_i b_i^p=1 will always have a solution, and since the function is strictly decreasing, that solution will be unique.
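
If we don’t need a closed form, a few lines of Python will do. Here’s a minimal sketch of our own (solve_p is a hypothetical helper, not a library function) that runs Newton’s method on Equation (2); since the left-hand side is strictly decreasing and convex in p, the iteration converges:

    import math

    def solve_p(a, b, tol=1e-12):
        """Solve sum(a_i * b_i**p) = 1 for p using Newton's method.

        a: the constants a_i > 0; b: the constants b_i in (0, 1)."""
        p = 0.0
        step = 1.0
        while abs(step) > tol:
            f = sum(ai * bi**p for ai, bi in zip(a, b)) - 1.0  # Equation (2) residual
            df = sum(ai * bi**p * math.log(bi) for ai, bi in zip(a, b))
            step = f / df
            p -= step
        return p

    # For T(n) = T(n/2) + T(n/4) + 2n: a_1 = a_2 = 1, b_1 = 1/2, b_2 = 1/4
    print(solve_p([1, 1], [1/2, 1/4]))  # prints roughly 0.694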

3.6. The Akra-Bazzi Theorem for Polynomial \boldsymbol{g}

In some cases, we can derive the complexity directly without evaluating the definite integral. The method is a corollary of the Akra-Bazzi Theorem:

If we can bound \boldsymbol{g(n)} by a power \boldsymbol{n^q}, then

(4)   \begin{equation*}   \begin{cases}     g(n) \in O(n^q) \implies T(n) \in \Theta(n^p),& q < p\\     g(n) \in \Theta(n^q) \implies T(n) \in \Theta(n^p \log n),& q = p \\     g(n) \in \Theta(n^q) \implies T(n) \in \Theta(n^q),& q > p   \end{cases} \end{equation*}

Luckily for us, the corollary applies to many algorithms.  Most of the time, g(n) will be of the form:

(5)   \begin{equation*}  g(n) = \sum_{\ell=0}^{q}c_{\ell}n^{\ell}\quad (\text{each } c_{\ell} \in R \text{ and } c_q > 0) \end{equation*}

where c_q, the highest-order coefficient, will be strictly positive. To see why c_q > 0, let’s imagine that c_q is negative. In that case, g(n) will become negative for a large enough n, which is impossible, as it would mean we make a negative number of steps.
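
As a side note, the corollary is easy to mechanize. Here’s a small Python helper (a sketch of our own; the float tolerance eps stands in for exact equality of q and p):

    def classify(p, q, eps=1e-9):
        """Return T's growth by Equation (4): q is the degree of g, p solves Equation (2)."""
        if abs(p - q) < eps:
            return f"Theta(n^{p:g} log n)"  # q = p
        if q < p:
            return f"Theta(n^{p:g})"        # g grows more slowly than n^p
        return f"Theta(n^{q:g})"            # g dominates

    print(classify(1.0, 1.0))    # Theta(n^1 log n), e.g., Merge Sort
    print(classify(0.694, 1.0))  # Theta(n^1), i.e., Theta(n)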

3.7. The \boldsymbol{O} and \boldsymbol{\Omega} Versions

In general, if g(n) \in L(f(n)), where L \in \{\Omega, \Theta, O\}, we’ll have:

    \[T(n) \in L\left(n^p \left(1 + \int_{1}^{n}\frac{f(u)}{u^{p+1}}du \right) \right)\]

The O and \Omega cases are of particular interest to us. If we can’t determine g exactly but can bound it asymptotically from below (g(n) \in \Omega(f(n))) or above (g(n) \in O(f(n))), we can get the lower and upper bounds for T.

4. Example

Let’s work out an example:

    \[T(n) = T\left(\frac{n}{2}\right) + T\left(\frac{n}{4}\right) + 2n\]

We see that the recurrence satisfies all the conditions of the Akra-Bazzi Theorem. So, the first step is to find p.

4.1. Finding \boldsymbol{p}

Since a_1=a_2=1, b_1=\frac{1}{2}, and b_2=\frac{1}{4}, we should solve:

    \[\left(\frac{1}{2}\right) ^p+ \left(\frac{1}{4}\right)^p = 1\]

Since \frac{1}{4} = \left(\frac{1}{2}\right)^2, the equation becomes:

    \[\left(\frac{1}{2}\right)^p +\left(\left(\frac{1}{2}\right)^p\right)^2 = 1\]

or:

    \[x^2 + x - 1 = 0 \quad \text{where } x=\left(\frac{1}{2}\right)^p\]

There are two possible values for x: x_1=\frac{\sqrt{5}-1}{2} and x_2=-\frac{1+\sqrt{5}}{2}. We can discard the latter since it’s negative and \left(\frac{1}{2}\right)^p must be positive. So, we get p by solving:

    \[\left(\frac{1}{2}\right)^p = \frac{\sqrt{5}-1}{2}\]

From there, we get:

    \[p = \log_{\frac{1}{2}} \frac{\sqrt{5}-1}{2} \approx 0.694\]

As a quick check, \left(\frac{1}{2}\right)^{0.694} \approx 0.618 and \left(\frac{1}{4}\right)^{0.694} \approx 0.618^2 \approx 0.382, which indeed sum to 1.

4.2. Applying the Akra-Bazzi Theorem

First, let’s calculate the integral:

    \[\begin{aligned} \int_{1}^{n}\frac{2u}{u^{0.694+1}}du &= 2\int_{1}^{n}u^{-0.694}du \\ &=\left. \frac{2}{0.306}u^{0.306} \right\rvert_{1}^{n} \\ &\approx 6.536 n^{0.306} - 6.536 \end{aligned}\]

Then, we get:

    \[T(n) \in \Theta\left( n^{0.694}\left(1+6.536 n^{0.306}-6.536\right)\right) = \Theta\left(6.536 n - 5.536n^{0.694}\right) = \Theta(n)\]

4.3. Shortcut

Would we get the same result if we used the shortcut for a polynomial g? Since g(n)=2n, its degree is q=1. As q = 1 > p \approx 0.694, the third case of Equation (4) gives T(n) \in \Theta(n^q) = \Theta(n), which is the correct answer.
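
As a final sanity check, we can evaluate the recurrence numerically. Here’s a short sketch (assuming ceilings for the sub-problem sizes, as in Section 3.3, and a hypothetical base case T(n) = 1 for n \leq 4) that shows T(n)/n levelling off:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        """T(n) = T(ceil(n/2)) + T(ceil(n/4)) + 2n, with T(n) = 1 for n <= 4."""
        if n <= 4:
            return 1
        return T(-(-n // 2)) + T(-(-n // 4)) + 2 * n  # -(-n // k) is ceil(n/k)

    for n in (10**3, 10**4, 10**5, 10**6):
        print(n, T(n) / n)  # the ratio flattens out, consistent with Theta(n)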

5. Conclusion

In this article, we presented the Akra-Bazzi method. We use it to determine the complexity of divide-and-conquer algorithms for which the Master Theorem fails.
