1. Overview

When facing a problem, we can usually consider several approaches to solving it. One of the most frequently asked questions is what the difference is between a greedy approach and dynamic programming.

In this tutorial, we’re going to explain the two concepts and provide a comparison between them.

2. Greedy Approach

2.1. Theoretical Idea

Solving a problem with a greedy approach means solving it step by step. At each step, the algorithm makes the choice that, according to some heuristic, looks most beneficial at that moment. The hope is that this sequence of locally optimal choices leads to an optimal overall solution, even though that's not always the case.

The greedy approach is suitable for problems where local optimality leads to an optimal global solution.

2.2. Example

Let’s take an example of a problem that can be solved using a greedy approach.

Say that we are given a set of activities. Each activity has a start and an end time. We’re asked to find the maximum number of activities that don’t intersect, or, in other words, that can be performed by a single person.

The greedy approach is to always choose the activity with the earliest ending time. Let’s prove the optimality of our heuristic.

Suppose an optimal solution began instead with some activity that ends at a later time. We could swap that activity for the one with the earliest ending time: since the earliest-ending activity finishes no later, the swap can't create an overlap with the remaining activities in the solution. In other words, the activity with the earliest ending time leaves at least as many options open for the next steps.

Therefore, some optimal solution always starts with the earliest-ending activity, and repeating this exchange argument at every step shows that always choosing the earliest ending time gives an optimal solution for this problem.

Let’s take a look at the implementation of the discussed algorithm:

algorithm MaxNonIntersectingActivitiesGreedy(activities):
    // INPUT
    //   activities = List of activities
    // OUTPUT
    //   result = The maximum number of non-intersecting activities
    sortOnEndTime(activities)
    currentEndTime <- negative infinity
    result <- 0
    for each activity in activities:
        if currentEndTime < activity.startTime:
            result <- result + 1
            currentEndTime <- activity.endTime
    return result

First, we sort the list of activities in ascending order of ending time. Next, we iterate over the activities in that order: if the starting time of an activity is later than the end time of the last chosen one, we take the activity and update the current end time.

The total time complexity of the above algorithm is O(n log n), where n is the total number of activities. The reason is the sort operation, which can be implemented in O(n log n), while the iteration itself takes only O(n).
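To make this more concrete, here's a minimal Python sketch of the same greedy algorithm; representing each activity as a (start, end) tuple is our own assumption, not part of the original pseudocode:

def max_non_intersecting_activities(activities):
    # activities is a list of (start, end) tuples.
    # Sort by ending time, which costs O(n log n).
    activities = sorted(activities, key=lambda a: a[1])
    current_end = float("-inf")
    result = 0
    for start, end in activities:
        # Take the activity only if it starts after the last chosen one ends.
        if current_end < start:
            result += 1
            current_end = end
    return result

print(max_non_intersecting_activities([(1, 3), (2, 5), (4, 7)]))  # 2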

Dijkstra's and Prim's algorithms are also well-known examples of the greedy approach.

2.3. Counter Example

Although the greedy approach may seem understandable and easy to implement, some problems can’t be solved using the greedy approach.

For example, the 0/1 knapsack problem can't be solved with a greedy approach: a locally attractive choice, such as the item with the best value-to-weight ratio, can block a combination of items with a higher total value. In order to use the greedy approach when solving a problem, we must first prove that local optimality leads to global optimality.
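As a quick illustration, here's a small Python sketch of such a failure; the specific instance and the value-to-weight heuristic are our own illustrative choices:

from itertools import combinations

def greedy_knapsack(items, capacity):
    # Heuristic: take items in decreasing value-per-weight ratio.
    total = 0
    for weight, value in sorted(items, key=lambda i: i[1] / i[0], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total

def optimal_knapsack(items, capacity):
    # Brute force over all subsets, for verification only.
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for w, _ in subset) <= capacity:
                best = max(best, sum(v for _, v in subset))
    return best

items = [(10, 60), (20, 100), (30, 120)]  # (weight, value) pairs
print(greedy_knapsack(items, 50))   # 160: the two best ratios block the heavy item
print(optimal_knapsack(items, 50))  # 220: taking the two heaviest items is better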

3. Dynamic Programming

3.1. Theoretical Idea

Almost every problem can be solved using the recursive backtracking approach. However, this approach usually has exponential time complexity.

The exponential time complexity usually comes from recalculating the same states over and over again. When that happens, dynamic programming is usually a good idea.

Dynamic programming comes from the idea of storing the already calculated states. When calculating the answer for a state, first, we check to see if we have calculated the same state before. If we have already calculated this state, we can just return its answer. Otherwise, we must calculate the answer for this state and store it.

This approach is called top-down dynamic programming, also known as memoization. However, there's another approach, called the bottom-up approach, which builds on the same idea.

The difference is that the bottom-up approach starts from small subproblems and tries to build up the answer for bigger subproblems. This process continues until we reach the solution to the full problem.
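As a simple illustration of the two flavors, here's a Python sketch that computes Fibonacci numbers both ways; the choice of Fibonacci is ours, purely for demonstration:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n):
    # Top-down: recurse, but store (memoize) every calculated state.
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

def fib_bottom_up(n):
    # Bottom-up: solve the smallest subproblems first and build upwards.
    dp = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_top_down(40), fib_bottom_up(40))  # both run in O(n) instead of O(2^n)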

The main benefit of using dynamic programming is that we move to polynomial time complexity, instead of the exponential time complexity in the backtracking version. Also, dynamic programming, if implemented correctly, guarantees that we get an optimal solution.

The reason behind dynamic programming's optimality is that it's merely an optimization over the backtracking approach, which explores all the possible choices.

3.2. Example

When it comes to dynamic programming, the 0/1 knapsack and the longest increasing subsequence problems are usually good places to start.

Anyway, let’s give a dynamic programming solution for the problem described earlier:

algorithm MaxNonIntersectingActivitiesDP(activities):
    // INPUT
    //   activities = List of activities
    // OUTPUT
    //   result = The maximum number of non-intersecting activities
    sortOnStartTime(activities)
    n <- length(activities)
    DP[n] <- 0
    for i <- n - 1 down to 0:
        // Option 1: leave the current activity.
        DP[i] <- DP[i + 1]
        // Option 2: take it and jump to the next compatible activity.
        next <- getNext(activities, i)
        DP[i] <- max(DP[i], DP[next] + 1)
    return DP[0]

First, we sort the list of activities in ascending order of starting time. The ith state, DP[i], stores the maximum number of activities that can be taken from the suffix of the list that starts at index i.

The base subproblem is that once we reach the end of the list, we can't take any more activities, so DP[n] = 0. Next, we iterate over the activities in descending order of index. For each activity, the first option is to leave the current activity and move on to the next one. In this case, we take the answer of DP[i+1].

Otherwise, we can pick the current activity. This means we must find the next activity to move to, which is what the getNext function calculates: the first activity that starts after the current one ends.

After that, the answer to the current state is the same as the next state incremented by one, because we picked the current activity. Finally, the answer to the ith state is the maximum between these two choices. The final answer will be stored in DP[0].

This algorithm has O(n^2) time complexity, because getNext performs a linear scan for each activity. However, since the array is sorted by starting time, we can perform a binary search to find the next activity. Thus, the total time complexity reduces to O(n log n).
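Here's a minimal Python sketch of this dynamic programming solution, using a binary search for getNext; as before, representing activities as (start, end) tuples is our own assumption:

from bisect import bisect_right

def max_non_intersecting_activities_dp(activities):
    # Sort by starting time, O(n log n).
    activities = sorted(activities)
    starts = [start for start, _ in activities]
    n = len(activities)
    dp = [0] * (n + 1)  # dp[i] = answer for the suffix i..n-1; dp[n] = 0 is the base
    for i in range(n - 1, -1, -1):
        # Option 1: leave activity i and move on.
        skip = dp[i + 1]
        # Option 2: take activity i; getNext becomes a binary search for the
        # first activity that starts after activity i ends.
        nxt = bisect_right(starts, activities[i][1])
        take = dp[nxt] + 1
        dp[i] = max(skip, take)
    return dp[0]

print(max_non_intersecting_activities_dp([(1, 3), (2, 5), (4, 7)]))  # 2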

The reason for the optimality is that in each step we explore all the possible choices. After that, we choose the one with the maximum profit for the entire subproblem. Also, each step depends on the next states, which we already calculated using the same approach.

3.3. Counter Example

In cases where the recursive approach doesn't revisit the same states many times, using dynamic programming might not pay off. For example, dynamic programming can't bring the traveling salesman problem down to polynomial time: the well-known Held-Karp dynamic programming algorithm still needs exponential O(2^n * n^2) time, because the number of distinct states is itself exponential.
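For reference, here's a compact Python sketch of the Held-Karp idea; the distance-matrix representation and the sample instance are our own illustrative assumptions. It shows that dynamic programming applies, but that the number of (mask, city) states is exponential:

from itertools import combinations

def held_karp(dist):
    # dist[i][j] = distance from city i to city j; the tour starts and ends at city 0.
    # Assumes at least two cities.
    n = len(dist)
    # dp[(mask, j)] = cheapest path from city 0 through exactly the cities in mask, ending at j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = sum(1 << j for j in subset)
            for j in subset:
                # Extend the best path over the subset without j by the edge k -> j.
                dp[(mask, j)] = min(dp[(mask ^ (1 << j), k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 2  # every city except city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 1, 4, 6],
        [1, 0, 2, 5],
        [4, 2, 0, 3],
        [6, 5, 3, 0]]
print(held_karp(dist))  # 12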

4. Comparison

Looking at a side-by-side comparison, we can see the main differences and similarities between the greedy approach and dynamic programming:

- Choices: the greedy approach makes a single, heuristic-based local choice at each step, while dynamic programming explores and compares all the possible choices for each state.
- Optimality: the greedy approach is only optimal when local optimality provably leads to global optimality, whereas dynamic programming, if implemented correctly, always yields an optimal solution.
- Efficiency: greedy algorithms are usually simpler and faster, while dynamic programming typically needs more time and memory to calculate and store the answers of all states.
- Similarities: both build the solution step by step, and both are usually applied to optimization problems.

In general, if we can solve the problem using a greedy approach, it’s usually the best choice to go with.

However, some problems may require a very complex greedy approach or are unsolvable using this approach. In this case, we try to optimize the recursive solution in order to implement a dynamic programming approach.

5. Conclusion

In this tutorial, we explained the main ideas behind the greedy approach and dynamic programming, with an example of each approach.

We also gave further examples of problems that can and can’t be solved using both approaches.

Finally, we summarized by presenting a basic comparison between the two approaches.
