1. Overview

Dijkstra’s algorithm is one of the most famous algorithms for finding shortest paths in a weighted graph. One of its central components is the priority queue used to traverse the nodes of the graph, and this queue is also the main difference between Dijkstra’s algorithm and BFS.

In this tutorial, we’ll consider three versions of this algorithm. The first uses a simple priority queue that supports only enqueuing and dequeuing the minimum node. The second checks for an existing node in the queue before enqueuing a new one, and the third supports decreasing the value of a key inside a heap.

2. Dijkstra’s Algorithm

First, let’s refresh our memory on the idea behind Dijkstra’s algorithm. In its general form, Dijkstra’s algorithm returns the collection of the shortest paths from a source node to all other nodes in the graph:

(Pseudocode: Dijkstra’s algorithm computing the shortest paths from a source node to all nodes)
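As a rough sketch of this all-nodes version in Python, with the graph represented as an adjacency dict mapping each node to a list of (neighbor, weight) pairs (both the representation and the function name are our choices, not part of the original pseudocode):

```python
import heapq

def dijkstra_all(graph, source):
    """Shortest distances from source to every reachable node.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of node -> shortest distance.
    """
    dist = {source: 0}
    queue = [(0, source)]  # min-heap of (distance, node) pairs
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale entry: a shorter path was already found
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return dist
```

This sketch uses the standard library’s heapq and simply skips stale queue entries; the different ways of handling the queue are exactly what the later sections refine.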

However, we often need to find only the shortest path from a source node to a single end node. In this case, we can use the following version, which also allows us to exit the algorithm early, based on the condition on line 10:

(Pseudocode: Dijkstra’s algorithm with an early exit once the end node is dequeued)

The CreatePath function on line 24 reconstructs the path from the recorded weights and previous nodes. For simplicity, it isn’t shown in the current algorithm; our intro to Dijkstra explains it. All the examples in this article build upon this version of Dijkstra’s algorithm, as finding the path between two nodes is its most common application.
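The path reconstruction itself can be sketched as a short helper that walks a map of previous nodes backward from the target (the function and parameter names here are our own, chosen to mirror CreatePath):

```python
def create_path(previous, source, target):
    """Rebuild the path by walking the `previous` map back from the target.

    previous: dict mapping each visited node to the node it was reached from.
    Returns the list of nodes from source to target, or [] if unreachable.
    """
    if target != source and target not in previous:
        return []  # the search never reached the target
    path = [target]
    while path[-1] != source:
        path.append(previous[path[-1]])
    path.reverse()  # we collected the nodes from target back to source
    return path
```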

3. Naive Implementation

This implementation of the classic Dijkstra algorithm has a significant flaw. However, it’s easy to make this mistake, especially when implementing the algorithm from scratch:

(Pseudocode: naive implementation that enqueues a new entry on every relaxation)

Although this algorithm returns the shortest path from the source node to the destination node, it does so suboptimally. In this approach, we constantly insert new nodes into the queue instead of updating the weights of the nodes already there. As a result, the number of elements in the queue might, in the worst case, grow to the number of edges in the graph. The increased queue size worsens the time complexity of enqueuing and dequeuing.

In this case, the time complexity is O(log(E)) for both the enqueue and dequeue operations, so the total time complexity is \mathbf{O(E*log(E))}. Since log(E) \le 2*log(V), this is asymptotically the same as O(E*log(V)), but the queue still wastes memory on duplicate entries. We most often see this implementation with priority queues that store duplicate nodes and don’t allow easy access to the elements inside by their key.
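A minimal Python sketch of this naive variant (using the same adjacency-dict representation as above, with names of our choosing) makes the flaw visible: every successful relaxation pushes a fresh entry, so duplicates accumulate in the heap and must be skipped when popped:

```python
import heapq

def dijkstra_naive(graph, source, target):
    """Naive variant: every improvement enqueues a new (distance, node)
    entry, so the queue can grow up to the number of edges E."""
    dist = {source: 0}
    queue = [(0, source)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d  # early exit: the target's distance is now final
        if node in visited:
            continue  # duplicate entry left over from an earlier relaxation
        visited.add(node)
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                # a stale duplicate of `neighbor` may already sit in the queue
                heapq.heappush(queue, (new_dist, neighbor))
    return float("inf")  # target is unreachable
```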

4. Improving the Time Complexity

To deal with the problem explained in the previous section, we need to ensure that the queue never contains multiple entries for the same node with different weights. There are a couple of ways to achieve this.

4.1. Eager Removal

Removing an existing entry before inserting helps us mitigate the issue created by the previous implementation, as it prevents the queue from ever holding duplicate nodes. Instead of cleaning up the queue lazily, we remove a node’s old entry whenever we insert a new one:

(Pseudocode: implementation that removes a node’s existing entry before enqueuing it with a new weight)

This approach can dramatically improve the complexity of the algorithm, making the time complexity of the enqueue and dequeue operations \mathbf{O(log(V))}. Unfortunately, this implementation requires a more complex queue. Standard implementations of sorted queues have linear time complexity for accessing an element by key. Thus, to use this approach, we should ensure that element retrieval doesn’t dominate the performance.
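To illustrate the idea only, here is a Python sketch of eager removal. Note that the standard library’s heapq does not support removal by key, so this sketch falls back to a linear-time list removal followed by re-heapification; it demonstrates the behavior, not the O(log(V)) bound, which would require an indexed heap:

```python
import heapq

def dijkstra_eager(graph, source, target):
    """Eager removal: before enqueuing a node, drop its existing entry,
    so each node appears in the queue at most once.

    Caveat: list.remove and heapify are linear-time; a production version
    needs a queue supporting logarithmic removal by key."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                if neighbor in dist:
                    # eagerly remove the old, heavier entry for this node
                    queue.remove((dist[neighbor], neighbor))
                    heapq.heapify(queue)
                dist[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return float("inf")
```

Because every node has at most one queue entry and, with non-negative weights, a dequeued node’s distance is final, no visited set or stale-entry check is needed here.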

4.2. Decrease-Key Implementation

Decreasing the key of a node already in the queue is more straightforward. In other words, if we find a new, shorter path to a particular node, we update this information directly in the queue:

(Pseudocode: implementation that decreases the key of a node already in the queue instead of re-inserting it)

If our heap allows us to decrease a specific key, this approach has the same time complexity as the previous one when combined with a binary heap. This implementation also ensures that the queue size can never exceed the number of nodes.
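One way to get a decrease-key operation in O(log(V)) is a binary heap that additionally tracks each node’s position. The following sketch (class and method names are ours) maintains a position map alongside the heap array, so a node’s key can be lowered and sifted up in place:

```python
class IndexedMinHeap:
    """Binary min-heap with a position map, supporting decrease_key in O(log V)."""

    def __init__(self):
        self.heap = []  # list of [key, node] pairs
        self.pos = {}   # node -> current index in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self.heap[i][0] >= self.heap[parent][0]:
                break
            self._swap(i, parent)
            i = parent

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self.heap[child][0] < self.heap[smallest][0]:
                    smallest = child
            if smallest == i:
                break
            self._swap(i, smallest)
            i = smallest

    def push(self, key, node):
        self.heap.append([key, node])
        self.pos[node] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def pop_min(self):
        self._swap(0, len(self.heap) - 1)
        key, node = self.heap.pop()
        del self.pos[node]
        if self.heap:
            self._sift_down(0)
        return key, node

    def decrease_key(self, node, new_key):
        i = self.pos[node]
        self.heap[i][0] = new_key
        self._sift_up(i)  # a smaller key can only move toward the root

    def __contains__(self, node):
        return node in self.pos

    def __bool__(self):
        return bool(self.heap)


def dijkstra_decrease_key(graph, source, target):
    """Dijkstra using decrease_key: each node has at most one queue entry."""
    dist = {source: 0}
    queue = IndexedMinHeap()
    queue.push(0, source)
    while queue:
        d, node = queue.pop_min()
        if node == target:
            return d
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                if neighbor in queue:
                    queue.decrease_key(neighbor, new_dist)  # update in place
                else:
                    queue.push(new_dist, neighbor)
    return float("inf")
```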

4.3. Improved Decrease-Key Implementation

Technically, the decrease-key implementation with special heap structures like Fibonacci heaps can dramatically improve the algorithm’s performance. The algorithm itself won’t change and will be the same as in the previous section, but the underlying implementation of the heap will be different. However, a Fibonacci heap has a greater constant factor than a binary heap and requires a more complex data structure. Additionally, the time complexity for this approach is amortized, meaning it’s averaged over a sequence of operations rather than guaranteed for each one.

5. Complexity Analysis

The table below compares the time complexities of the queue operations for these approaches, excluding the naive one:

Heap        | Enqueue    | Dequeue               | Decrease-Key    | Total (Dijkstra)
Binary      | O(log(V))  | O(log(V))             | O(log(V))       | O(E*log(V))
Binomial    | O(log(V))  | O(log(V))             | O(log(V))       | O(E*log(V))
Fibonacci   | O(1)       | O(log(V)) amortized   | O(1) amortized  | O(E + V*log(V))

As we can see, the implementations with decrease-key don’t provide a significant improvement with binary and binomial heaps. However, with a different priority queue implementation, such as a Fibonacci heap, we can significantly decrease the time complexity. Still, we need to remember that the bounds given for the Fibonacci heap are amortized, i.e., distributed across a sequence of operations.

6. Conclusion

In this article, we learned that a priority queue implementation supporting the decrease-key operation improves the algorithm’s complexity with specific heaps, like the Fibonacci heap. However, most languages use a binary heap as the primary implementation of this data structure. Thus, the simplest way to approach this problem and avoid the increased complexity on dense graphs is to use a queue implementation that allows checking for existing nodes, simulating the update by removing a node and re-inserting it with its new weight.
