1. Overview

Searching is a fundamental problem in computer science, so many search algorithms have been developed to find an optimal solution to a given problem. Hill Climbing and Best First Search (BeFS) are two well-known examples. Although they're similar in some respects, they also have important differences.

In this tutorial, we'll first describe the Hill Climbing and Best First Search (BeFS) algorithms and then compare their characteristics.

2. Hill Climbing

Hill climbing is a simple heuristic search algorithm. In search of the optimum, we start from a random point and examine the neighboring points. If we find a point that is better than the current one, we move to it. We then repeat the process from the new point until we reach a point with no better neighbor.

We can also express the process in pseudocode:

    current <- a random initial state
    loop:
        neighbor <- a successor of current that is better than current
        if no such neighbor exists:
            return current    // we've reached a local optimum
        current <- neighbor
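As an illustration, here's a minimal Java sketch of simple hill climbing. The integer objective function and the neighborhood (one step left or right) are our own assumptions for the example, not part of any standard library:

```java
import java.util.function.IntUnaryOperator;

public class HillClimbing {

    // Simple hill climbing over the integers: move to the first
    // neighboring point (one step left or right) that improves the
    // objective, and stop as soon as no neighbor is better.
    static int climb(IntUnaryOperator f, int start) {
        int current = start;
        while (true) {
            int next = current;
            for (int neighbor : new int[] { current - 1, current + 1 }) {
                if (f.applyAsInt(neighbor) > f.applyAsInt(current)) {
                    next = neighbor;   // take the first improving neighbor
                    break;
                }
            }
            if (next == current) {
                return current;        // local optimum: no better neighbor
            }
            current = next;
        }
    }

    public static void main(String[] args) {
        // A concave parabola whose single maximum is at x = 3.
        IntUnaryOperator f = x -> -(x - 3) * (x - 3);
        System.out.println(climb(f, -10)); // prints 3
    }
}
```

Since this objective has a single peak, the climb always ends at the global maximum; with several peaks, the result would depend on the starting point.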

3. Best First Search

Best First Search (BeFS), not to be confused with Breadth-First Search (BFS), is a large family of algorithms; for instance, A* and B* belong to it. BeFS combines the strengths of BFS and Depth-First Search (DFS): while BFS and DFS traverse a graph blindly, without any notion of path cost, BeFS uses an evaluation (heuristic) function that estimates how far the current state is from the goal state.

Then, at each step, BeFS expands the node with the lowest heuristic value instead of blindly traversing only the depth or breadth of the search space. To do so, it keeps two lists, one of open and one of closed nodes. At each step, it sorts the open list and chooses the most promising node according to the heuristic function.

Here's the pseudocode for the Best First Search algorithm:

    open <- a priority queue containing the start node
    closed <- an empty set
    while open is not empty:
        node <- the node in open with the lowest heuristic value
        if node is the goal:
            return the path from the start node to node
        move node from open to closed
        for each successor s of node:
            if s is neither in open nor in closed:
                add s to open
    return failure
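To make this concrete, here's a minimal Java sketch of greedy best-first search. The toy graph and the heuristic values are made up for the example:

```java
import java.util.*;

public class BestFirstSearch {

    // Greedy best-first search: always expand the open node with the
    // lowest heuristic value h(n), keeping expanded nodes in a closed
    // set so we never expand a node twice.
    static List<String> search(Map<String, List<String>> graph,
                               Map<String, Integer> h,
                               String start, String goal) {
        PriorityQueue<String> open =
            new PriorityQueue<>(Comparator.comparingInt(h::get));
        Map<String, String> parent = new HashMap<>();
        Set<String> closed = new HashSet<>();
        open.add(start);
        while (!open.isEmpty()) {
            String node = open.poll();            // lowest h among open nodes
            if (node.equals(goal)) {
                return path(parent, goal);
            }
            closed.add(node);
            for (String s : graph.getOrDefault(node, List.of())) {
                if (!closed.contains(s) && !open.contains(s)) {
                    parent.put(s, node);
                    open.add(s);
                }
            }
        }
        return List.of();                         // no path found
    }

    // Walk the parent links back from the goal to rebuild the path.
    private static List<String> path(Map<String, String> parent, String node) {
        List<String> path = new ArrayList<>();
        for (String n = node; n != null; n = parent.get(n)) {
            path.add(0, n);
        }
        return path;
    }

    public static void main(String[] args) {
        // A toy graph and hand-made heuristic values (0 at the goal).
        Map<String, List<String>> graph = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("D"),
            "C", List.of("D"),
            "D", List.of("G"));
        Map<String, Integer> h = Map.of("A", 3, "B", 2, "C", 1, "D", 1, "G", 0);
        System.out.println(search(graph, h, "A", "G")); // prints [A, C, D, G]
    }
}
```

Note that the closed set is what prevents the algorithm from revisiting a node's parent and looping, which matches the loop-avoidance discussion below.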

4. Comparison

The two algorithms have a lot in common, so their advantages and disadvantages are somewhat similar. For instance, neither is guaranteed to find the optimal solution.

For hill climbing, this happens when it gets stuck in a local optimum. One way to bypass local optima is to run the algorithm several times, in parallel, with different random initializations. This way, we increase our chances of finding the "hill" that leads to the global optimum. For BeFS, there's a possibility of getting stuck in a loop, for example by visiting a node whose successor is its own parent. We can avoid this with a good choice of heuristic function.
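The random-restart idea can be sketched in Java as follows. The two-hill objective function, the restart count, and the range of starting points are illustrative assumptions:

```java
import java.util.Random;
import java.util.function.IntUnaryOperator;

public class RandomRestart {

    // Run hill climbing from several random starting points and keep the
    // best local optimum found; more restarts mean a better chance of
    // landing on the hill that contains the global optimum.
    static int bestOfRestarts(IntUnaryOperator f, int restarts, Random rng) {
        int bestValue = Integer.MIN_VALUE;
        int bestX = 0;
        for (int i = 0; i < restarts; i++) {
            int x = climb(f, rng.nextInt(201) - 100); // random start in [-100, 100]
            if (f.applyAsInt(x) > bestValue) {
                bestValue = f.applyAsInt(x);
                bestX = x;
            }
        }
        return bestX;
    }

    // Plain hill climbing: step towards an improving neighbor until stuck.
    static int climb(IntUnaryOperator f, int start) {
        int current = start;
        while (true) {
            int next = current;
            if (f.applyAsInt(current - 1) > f.applyAsInt(next)) next = current - 1;
            else if (f.applyAsInt(current + 1) > f.applyAsInt(next)) next = current + 1;
            if (next == current) return current;  // local optimum
            current = next;
        }
    }

    public static void main(String[] args) {
        // Two "hills": a local optimum at x = -40 and the global one at x = 50.
        IntUnaryOperator f = x -> Math.max(-(x + 40) * (x + 40) + 10,
                                           -(x - 50) * (x - 50) + 100);
        System.out.println(bestOfRestarts(f, 20, new Random(42)));
    }
}
```

A single climb from a start left of the valley between the hills would settle on the lower peak at x = -40; with enough restarts, at least one start almost surely lands in the basin of the global peak at x = 50.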

In addition, hill climbing is prone to problems with plateaus and ridges. A plateau is a region where the surrounding points are all equal to the current state. This case confuses the algorithm, since it can't decide on the next move, and picking one at random might put us on a path that leads nowhere. Ridges, on the other hand, are local optima whose gradients are not zero. They're hard for the algorithm to traverse, since it can't move in every direction; it has to follow a zigzag-shaped path towards the goal, which slows it down.

On the positive side, hill climbing needs very little memory, since at each step it only has to store the current state and the goal state. In contrast, BeFS has to store all the open and closed nodes in two lists in order to choose the best next state, so it requires considerably more memory.

Finally, while hill climbing moves to the first neighbor it finds that is better than the current state, BeFS evaluates all the candidate neighbors with the heuristic function and compares them. This makes it possible to choose the best one among several states.

We can summarize the pros and cons of the two algorithms as follows:

- Neither algorithm is guaranteed to find the global optimum, although hill climbing does find it when the objective function is convex
- Hill climbing can get stuck in local optima, plateaus, and ridges, while BeFS can loop if the heuristic is chosen poorly
- Hill climbing needs very little memory, whereas BeFS has to store all the open and closed nodes
- Hill climbing moves to the first improving neighbor, while BeFS expands the best open node according to the heuristic

5. Conclusion

In this tutorial, we learned about two well-known search algorithms, Hill Climbing and Best First Search, and compared some of the features that make them distinct from each other.
