1. Introduction

In this tutorial, we’ll discuss two key concepts in machine learning and deep learning: training loss and validation loss.

We’ll begin by defining these two concepts. Then, we’ll review where and how they are used. Lastly, we’ll examine their implications for deep learning models.

2. Definitions

To start, let’s define some basic concepts.

Deep learning is a branch of machine learning built on artificial neural networks. In particular, deep learning algorithms allow computer programs to learn and discover patterns in massive amounts of data.

Artificial neural networks are algorithms inspired by the workings of biological neural networks in living organisms. An artificial neural network is usually composed of a system of interconnected nodes and weights. Input signals are passed to nodes known as neurons, where they are multiplied by weights and summed; each neuron then applies an activation function to this sum to produce an output signal.
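To make this concrete, here’s a minimal sketch of a single neuron’s computation in Python. The input values, weights, bias, and the choice of a sigmoid activation are all illustrative:

```python
import numpy as np

def sigmoid(z):
    # A common activation function that squashes values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative input signals, weights, and bias for a single neuron
x = np.array([0.5, -1.2, 3.0])   # input signals
w = np.array([0.8, 0.1, -0.4])   # weights
b = 0.2                          # bias

# Multiply inputs by weights, sum them, then apply the activation function
output = sigmoid(np.dot(w, x) + b)
print(output)
```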

Consequently, when we apply a deep learning algorithm to a specific dataset, we obtain a model that can take in some input and produce an output. To assess the performance of this model, we use a measure known as loss. Specifically, the loss quantifies the error produced by the model.

A high loss value usually means the model is producing erroneous output, while a low loss value indicates that the model produces fewer errors. In addition, the loss is usually calculated using a cost function, which measures the error in different ways. The cost function chosen usually depends on the problem being solved and the data being used. For example, cross-entropy is usually used for classification problems, with binary cross-entropy being the standard choice for binary classification.
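As an illustration, here’s a minimal sketch of computing binary cross-entropy over a batch of predictions. The labels and predicted probabilities are made up for the example:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # Average the per-example cross-entropy
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Made-up true labels and predicted probabilities
y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.7, 0.4])

print(binary_cross_entropy(y_true, y_pred))  # lower values mean fewer errors
```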

3. Training Loss

The training loss is a metric used to assess how well a deep learning model fits the training data. That is to say, it assesses the error of the model on the training set. Note that the training set is the portion of a dataset used to initially train the model. Computationally, the training loss is calculated by summing (or averaging) the errors for each example in the training set.

It is also important to note that the training loss is measured after each batch. This is usually visualized by plotting a curve of the training loss.
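As a rough sketch, a training loop in PyTorch (an assumed framework choice; the model, data, and hyperparameters below are placeholders) might record the loss after each batch like this:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 256 samples with 10 features and binary labels
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256, 1)).float()
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# A minimal model for illustration
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

batch_losses = []
for X_batch, y_batch in loader:
    optimizer.zero_grad()
    loss = criterion(model(X_batch), y_batch)
    loss.backward()
    optimizer.step()
    batch_losses.append(loss.item())  # training loss recorded after each batch
```

Plotting batch_losses then gives the training loss curve.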

4. Validation Loss

In contrast, the validation loss is a metric used to assess the performance of a deep learning model on the validation set. The validation set is a portion of the dataset set aside to validate the performance of the model. The validation loss is similar to the training loss and is calculated by summing (or averaging) the errors for each example in the validation set.

Additionally, the validation loss is measured after each epoch. This informs us as to whether the model needs further tuning or adjustments. To do this, we usually plot a learning curve for the validation loss.
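Continuing the training sketch above (the model and criterion carry over, and the validation data is again a placeholder), the validation loss could be computed once per epoch with gradients disabled:

```python
# Placeholder validation data, evaluated once per epoch
X_val = torch.randn(64, 10)
y_val = torch.randint(0, 2, (64, 1)).float()

model.eval()               # switch off training-specific behavior such as dropout
with torch.no_grad():      # gradients aren't needed for evaluation
    val_loss = criterion(model(X_val), y_val).item()
model.train()

print(f"validation loss after this epoch: {val_loss:.4f}")
```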

5. Implications of Training and Validation Loss

In most deep learning projects, the training and validation losses are usually visualized together on the same graph. The purpose of this is to diagnose the model’s performance and identify which aspects need tuning. To illustrate, we’ll look at three different scenarios, starting with a sketch of how such a plot can be produced.
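If the per-epoch losses have been collected into two lists, matplotlib can plot them on the same axes. The loss values below are made up purely for illustration:

```python
import matplotlib.pyplot as plt

# Made-up per-epoch losses
train_losses = [0.9, 0.6, 0.45, 0.38, 0.35, 0.33]
val_losses = [0.95, 0.7, 0.55, 0.5, 0.49, 0.48]

plt.plot(train_losses, label="training loss")
plt.plot(val_losses, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```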

5.1. Underfitting

Let’s consider scenario 1, where the image below illustrates that the training loss and validation loss are both high:

[Figure: scenario 1, training and validation loss both remain high]

At times, the validation loss can even be greater than the training loss. Together with a high training loss, this indicates that the model is underfitting. Underfitting occurs when the model is unable to accurately model the training data and hence generates large errors.

Furthermore, the results in scenario 1 indicate that further training is needed to reduce the loss incurred during training. Alternatively, we can increase the training data, either by obtaining more samples or by augmenting the existing data.

5.2. Overfitting

In scenario 2, the validation loss is greater than the training loss, as seen in the image:

[Figure: scenario 2, validation loss rises above the training loss]

This usually indicates that the model is overfitting and cannot generalize to new data. In particular, the model performs well on the training data but poorly on the new data in the validation set. Typically, the validation loss decreases at first but then starts to increase again, even as the training loss keeps falling.

A notable reason for this occurrence is that the model may be too complex for the data, or that it was trained for too long. In this case, training can be halted once the validation loss is low and stable; this is usually known as early stopping. Early stopping is one of the many approaches used to prevent overfitting.
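Here’s a minimal sketch of the early-stopping logic, based purely on a list of per-epoch validation losses (the patience value and the loss values are illustrative):

```python
def early_stopping_epoch(val_losses, patience=3):
    # Return the epoch at which training would be halted: the first epoch
    # after which the validation loss has not improved for `patience` epochs
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch, val_loss in enumerate(val_losses):
        if val_loss < best_loss:
            best_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch
    return len(val_losses) - 1

# Made-up validation losses that decrease, then start rising (overfitting)
val_losses = [0.9, 0.7, 0.55, 0.5, 0.52, 0.56, 0.6, 0.65]
print(early_stopping_epoch(val_losses))  # stops shortly after the minimum
```

In practice, we would also keep a checkpoint of the model weights from the best epoch and restore them when training stops.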

5.3. Good Fit

In scenario 3, shown in the image below, the training loss and validation loss both decrease and stabilize at a specific point:

[Figure: scenario 3, training and validation loss both decrease and stabilize]

This indicates an optimal fit, i.e., a model that neither overfits nor underfits.

6. Conclusions

In this tutorial, we reviewed some basic concepts in deep learning. Next, we discussed training loss and validation loss and how they are used.

Finally, we reviewed three different scenarios with both losses and their implications on the models being built.
