1. Introduction

In this tutorial, we’ll present the use of the Bellman equation in reinforcement learning.

2. Reinforcement Learning

In machine learning, we usually train models with the traditional supervised and unsupervised methods. In addition, we can use a separate learning paradigm called reinforcement learning. This technique allows us to design a framework in which the model learns about the environment and derives the solution on its own:

reinforcement learning

The model learns about the environment by collecting a reward for each action it takes on its way to the goal. However, the agent sometimes finds only an approximate solution: it mistakes the first path it discovers for the only one and, while backtracking, marks it as the solution. Since all the states on that path are marked with a value of 1 (a positive reward), this causes difficulty in reaching the target location from other starting points.

In other words, such a value function may not always be suitable for the environment. Consequently, the Bellman operator was created to improve how solutions are derived in reinforcement learning.
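To make the learning loop concrete, here’s a minimal sketch of the agent–environment interaction in Python. The toy one-dimensional environment, its class, and its method names are all hypothetical and only illustrate the general framework:

```python
import random

class Environment:
    """Hypothetical toy environment: states 0..4 in a line, goal at state 4."""

    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action: +1 (right) or -1 (left); reward +1 only when the goal is reached
        self.state = max(0, min(4, self.state + action))
        reward = 1 if self.state == 4 else 0
        done = self.state == 4
        return self.state, reward, done

env = Environment()
state = env.reset()
done = False
while not done:
    action = random.choice([-1, 1])          # the agent explores by acting randomly
    state, reward, done = env.step(action)   # the environment responds with a reward
```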

3. Bellman Operator

The Bellman equation, named after Richard E. Bellman, sets the value of a state using not only its immediate reward but also a hint about the reward of the next action.

The reinforcement learning agent aims to take the actions that produce the maximum reward. Bellman’s method considers both the reward of the current action and the predicted reward of future actions, and we can illustrate it with the following formulation:

V(S_{c}) = \max_{a}[R(S_{c}, a) + \gamma V(S_{n})]

where S_{c} and S_{n} indicate the current and next state, a represents an action, V indicates the value of a state, R(S_{c}, a) is the reward for taking action a in state S_{c}, and \gamma is the discount factor, which takes a value in the range (0, 1).
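As a minimal sketch, assuming a tiny hypothetical deterministic environment with hand-written transition and reward tables, the formulation above translates directly into a one-step value update that we can apply repeatedly:

```python
# Hypothetical deterministic environment: next_state[s][a] and reward[s][a]
next_state = {"A": {"right": "B"}, "B": {"right": "C"}, "C": {}}
reward = {"A": {"right": 0}, "B": {"right": 1}, "C": {}}
gamma = 0.9

V = {s: 0.0 for s in next_state}  # initial value estimates

def bellman_update(s):
    """V(s) = max_a [ R(s, a) + gamma * V(s') ] over the actions available in s."""
    if not next_state[s]:          # terminal state keeps its current value
        return V[s]
    return max(reward[s][a] + gamma * V[next_state[s][a]] for a in next_state[s])

# Repeatedly applying the update propagates the goal reward backwards through the states
for _ in range(10):
    V = {s: bellman_update(s) for s in V}

print(V)  # {'A': 0.9, 'B': 1.0, 'C': 0.0}
```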

3.1. Example

Let’s consider a game environment in which the player aims to reach a goal location that yields the highest reward. We can visualize the game environment:

example environment for bellman

The environment space has an agent starting from a source location and aiming to reach a target location. The agent can receive three types of rewards: positive (+1), neutral (0), and negative (-1). The reward at the target location is 1, normal states have a neutral reward, and undesirable states have a negative reward. The possible actions are right, left, up, and down. Moreover, we take the value of \gamma as 0.9 so that the value of the next state makes a difference when deciding the value of each state.
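Since the exact grid layout comes from the figure, the sketch below only encodes the reward scheme just described; the names of the negative-reward states are placeholders, not taken from the figure:

```python
# Hypothetical encoding of the reward scheme: moving into the goal yields +1,
# moving into an undesirable state yields -1, and any other move yields 0.
GOAL = "S15"
NEGATIVE_STATES = {"S7", "S12"}  # placeholder names; the real ones come from the figure
GAMMA = 0.9                      # the discount factor used throughout the example

def reward(next_state):
    """R(S_c, a) in this example depends only on the state the action leads to."""
    if next_state == GOAL:
        return 1
    if next_state in NEGATIVE_STATES:
        return -1
    return 0
```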

The agent explores the environment and calculates the value of each state according to Bellman’s operator. For example, the target location is S_{15}, and the reward for reaching it is 1. Let’s assume the agent moves along the path covering the states S_{1}, S_{2}, S_{3}, S_{8}, S_{13}, S_{14}, and finally S_{15}, without moving to the states that are close to the negative-reward states. Since the Bellman equation considers the value of the next state, we present the calculation in reverse order:

Let’s take the value of state S_{15} to be 0 since it’s the final goal state. Now, we can calculate:

V(S_{14}) = R(S_{14}, right) + \gamma V(S_{15}) = 1 + (0.9)(0) = 1.

V(S_{13}) = R(S_{13}, right) + \gamma V(S_{14}) = 0 + (0.9)(1) = 0.9.

V(S_{8}) = R(S_{8}, down) + \gamma V(S_{13}) = 0 + (0.9)(0.9) = 0.81.

V(S_{3}) = R(S_{3}, down) + \gamma V(S_{8}) = 0 + (0.9)(0.81) \approx 0.73.

V(S_{2}) = R(S_{2}, right) + \gamma V(S_{3}) = 0 + (0.9)(0.73) \approx 0.66.

V(S_{1}) = R(S_{1}, right) + \gamma V(S_{2}) = 0 + (0.9)(0.66) \approx 0.59.
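We can reproduce this backward calculation with a short loop. The path below mirrors the example, only the final move into S_{15} carries a non-zero reward, and the values are rounded to two decimals at each step, as above:

```python
GAMMA = 0.9
path = ["S1", "S2", "S3", "S8", "S13", "S14", "S15"]

V = {"S15": 0.0}                         # the goal state is given a value of 0
for i in range(len(path) - 2, -1, -1):   # walk the path backwards: S14, S13, ..., S1
    state, nxt = path[i], path[i + 1]
    r = 1 if nxt == "S15" else 0         # only the move into the goal is rewarded
    V[state] = round(r + GAMMA * V[nxt], 2)

print(V)
# {'S15': 0.0, 'S14': 1.0, 'S13': 0.9, 'S8': 0.81, 'S3': 0.73, 'S2': 0.66, 'S1': 0.59}
```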

As a result, the agent builds a picture of the states that lead to the target state. Finally, it reaches the target location, regardless of its source location, by following the states with increasing values.

4. Advantages and Disadvantages

Let’s examine some of the main advantages and disadvantages of using Bellman’s operator in reinforcement learning.

4.1. Advantages

With other algorithms that simply find a path from the source to the solution, every state on the discovered path is marked with a value of 1 once the path is found. However, this information can’t be reused later: starting from a different source location, the agent observes that every state on the old path has the same value of 1, so there’s no difference between the goal and the intermediate steps, and the setup collapses.

By using Bellman’s formulation, once an agent has found the path to the goal, we also obtain a value mapping of the whole environment. For example, we can suitably apply the Bellman operator to the single-source shortest path problem.
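As an illustration of this connection (not part of the original example), here’s a minimal Bellman-Ford sketch over a small hypothetical weighted graph, where relaxing an edge plays the same role as the value update, with a minimum instead of a maximum and no discounting:

```python
import math

# Hypothetical weighted graph as an edge list: (source, target, weight)
edges = [("A", "B", 4), ("A", "C", 1), ("C", "B", 2), ("B", "D", 5)]
nodes = ["A", "B", "C", "D"]

# Bellman-Ford recursion: dist(v) = min over incoming edges (u, v) of dist(u) + w
dist = {n: math.inf for n in nodes}
dist["A"] = 0                        # single source
for _ in range(len(nodes) - 1):      # relax every edge |V| - 1 times
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w

print(dist)  # {'A': 0, 'B': 3, 'C': 1, 'D': 8}
```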

4.2. Disadvantages

Bellman’s method works as an optimization equation that requires at least basic knowledge of the environment. For example, in environments where we don’t know which states should be marked with a negative reward and which with a neutral one, the value function can give incorrect results and defeat the whole idea. As a result, it’s not well suited for complex and larger environment spaces.

5. Conclusion

In this article, we presented the general idea behind reinforcement learning and how the Bellman operator aids this learning technique. Additionally, we discussed an example along with the merits and demerits of the Bellman operator.
