1. Overview
In this tutorial, we'll explain the difference between the cost, loss, and objective functions in machine learning. However, we should note that there's no consensus on the exact definitions and that the three terms are often used as synonyms.
2. Loss Functions
The loss function quantifies how much a model's prediction deviates from the ground truth for one particular object. So, when we calculate the loss, we do it for a single object in the training or test set.
There are many different loss functions we can choose from, and each has its advantages and shortcomings. In general, any distance metric defined over the space of target values can act as a loss function.
2.1. Example: the Square and Absolute Losses in Regression
Very often, we use the squared error as the loss function in regression problems:

$$L(y, \hat{y}) = (y - \hat{y})^2$$
For instance, let’s say that our model predicts a flat’s price (in thousands of dollars) based on the number of rooms, area (), floor, and the neighborhood in the city ( or ). Let’s suppose that its prediction for is USD k. If the actual selling price is USD k, then the square loss is:
Another loss function we often use for regression is the absolute loss:

$$L(y, \hat{y}) = |y - \hat{y}|$$
In our example with apartment prices, its value will be:
Choosing the loss function isn't an easy task. Since the cost aggregates the individual losses, the loss we choose plays a critical role in fitting a model.
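The two losses above are easy to sketch in code. Here's a minimal example with hypothetical prices (the function names and figures are our own, not from the article's original data):

```python
def square_loss(y_true: float, y_pred: float) -> float:
    """Square loss for one object: (y - y_hat)^2."""
    return (y_true - y_pred) ** 2

def absolute_loss(y_true: float, y_pred: float) -> float:
    """Absolute loss for one object: |y - y_hat|."""
    return abs(y_true - y_pred)

# A hypothetical flat predicted at 340k USD that actually sold for 300k USD:
print(square_loss(300, 340))    # 1600 (thousands of dollars, squared)
print(absolute_loss(300, 340))  # 40 (thousands of dollars)
```

Note how the square loss is in squared units of the target, while the absolute loss stays in the target's own units.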
3. Cost Functions
The term cost is often used as synonymous with loss. However, some authors make a clear difference between the two. For them, the cost function measures the model’s error on a group of objects, whereas the loss function deals with a single data instance.
So, if $L$ is our loss function, then we calculate the cost function by aggregating the loss over the training, validation, or test data. For example, we can compute the cost as the mean loss:

$$C = \frac{1}{n} \sum_{i=1}^{n} L(y_i, \hat{y}_i)$$
But nothing stops us from using the median, a summary statistic less sensitive to outliers:

$$C = \operatorname{median} \left\{ L(y_i, \hat{y}_i) : i = 1, 2, \ldots, n \right\}$$
The cost function serves two purposes. First, its value for the test data estimates our model's performance on unseen objects. That allows us to compare different models and choose the best one. Second, we use it to train our models.
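The two aggregation choices can be sketched as follows; the loss values below are hypothetical, with the last one acting as an outlier:

```python
import statistics

def mean_cost(losses):
    """Cost as the mean of per-object losses."""
    return sum(losses) / len(losses)

def median_cost(losses):
    """Cost as the median loss -- less sensitive to outliers."""
    return statistics.median(losses)

# Hypothetical square losses for four objects; the last is an outlier.
losses = [100, 225, 400, 10000]
print(mean_cost(losses))    # 2681.25 -- dragged up by the outlier
print(median_cost(losses))  # 312.5   -- barely affected by it
```

The gap between the two values shows why the median can be preferable when a few objects have extreme losses.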
3.1. Example: Cost as the Average Square Loss
Let’s say that we have the data on four flats and that our model predicted the sale prices as follows:
We can calculate the cost, i.e., the total loss over the data, as the mean square loss for individual flats:
3.2. Other Examples of Cost
However, the cost isn't in the same units as the prices. Instead of thousands of dollars, the numerical value of the cost denotes millions of squared dollars. That's a problem for interpretation since the square of a currency doesn't make sense in the real world. We can address it by taking the square root of the mean square loss:

$$\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$$

This particular cost function is known as the Root-Mean-Square Error (RMSE). We usually interpret it as the expected deviation of predictions from the ground truth. So, in our example, we conclude that the predicted flat prices are off by USD 43,860 on average. Using the mean absolute loss, we'd get a total cost of:
That is USD 27,450 per flat on average. Similarly, the root of the median square loss yields a cost of approximately twelve thousand dollars.
As we see, just as there are many ways to define a loss for a single object, there are multiple ways to combine the losses over a set of instances.
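These three ways of combining losses can be sketched as follows. Since the article's original table of four flats isn't reproduced here, the prices and predictions below are hypothetical:

```python
import math
import statistics

def rmse(y_true, y_pred):
    """Root-Mean-Square Error: back in the units of the target."""
    squares = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(squares) / len(squares))

def mae(y_true, y_pred):
    """Mean absolute loss."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def root_median_square(y_true, y_pred):
    """Square root of the median square loss."""
    squares = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(statistics.median(squares))

# Hypothetical selling prices and predictions (thousands of USD):
actual    = [300, 450, 220, 510]
predicted = [340, 430, 210, 600]
print(rmse(actual, predicted))                # ~50.5
print(mae(actual, predicted))                 # 40.0
print(root_median_square(actual, predicted))  # ~31.62
```

All three costs are in the same units as the prices, so each can be read as a typical error in thousands of dollars; they differ only in how sensitive they are to large individual errors.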
3.3. How to Remember the Difference Between the Loss and Cost?
Many newcomers to the field (and experts alike) complain that the difference between loss and cost is artificial and that they're often confused about which term means what. A mnemonic trick is to remember that loss starts like lonely: the loss is for a single, lonely data instance, while the cost is for a set of objects.
4. Objective Functions
While training a model, we minimize the cost (loss) over the training data. However, a low value there isn't the only thing we should care about. Generalization capability is even more important since a model that works well only on the training data is useless in practice.
So, to avoid overfitting, we add a regularization term that penalizes the model's complexity. That way, we get a new function to minimize during training:

$$J(\theta) = C(\theta) + \lambda R(\theta)$$

where $C$ is the cost, $R$ measures the complexity of the model with parameters $\theta$, and $\lambda$ controls the strength of the penalty.
In general, the objective function is the one we optimize, i.e., whose value we want to either minimize or maximize. The cost function, that is, the loss over a whole set of data, is not necessarily the one we’ll minimize, although it can be. For instance, we can fit a model without regularization, in which case the objective function is the cost function.
4.1. Example: the Loss, Cost, and the Objective Function in Linear Regression
Let's say we are training a linear regression model:

$$\hat{y} = f_{\theta}(x) = \sum_{j=0}^{d} \theta_j x_j = \theta^T x$$
We'll assume the data are $d$-dimensional, and we prepend a dummy feature $x_0 = 1$ to all the instances to simplify the expression.
Averaging the square loss over the training data, we get:

$$C(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \theta^T x_i \right)^2$$
That's our cost function or, as we can also call it, the loss over the data. However, if we want to prevent the model from overfitting, we can add a regularization term (whose parameter's value we can determine empirically):

$$\lambda R(\theta) = \lambda \sum_{j=1}^{d} \theta_j^2$$
Therefore, the objective function we'll minimize during training is the sum of the cost and the regularization penalty. Usually, we divide it by 2 to make the derivatives easier to calculate:

$$J(\theta) = \frac{1}{2} \left( \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \theta^T x_i \right)^2 + \lambda \sum_{j=1}^{d} \theta_j^2 \right)$$
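The loss, cost, and objective of linear regression can be sketched as plain functions. This is a minimal illustration, assuming an L2 (ridge-style) penalty and the common convention of not penalizing the bias term; the article doesn't fix those details, so treat them as choices:

```python
def predict(theta, x):
    """Linear model: theta^T x, where x[0] == 1 is the dummy feature."""
    return sum(t * xi for t, xi in zip(theta, x))

def cost(theta, X, y):
    """Mean square loss over the whole data set."""
    return sum((yi - predict(theta, xi)) ** 2 for xi, yi in zip(X, y)) / len(y)

def objective(theta, X, y, lam):
    """Cost plus an L2 penalty on the non-bias parameters,
    divided by 2 to simplify the derivatives."""
    penalty = lam * sum(t ** 2 for t in theta[1:])
    return (cost(theta, X, y) + penalty) / 2
```

With `lam = 0`, the objective reduces to half the cost, which matches the earlier observation that without regularization the objective and cost coincide (up to the conventional constant factor).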
Let’s see how this works in our flat price prediction example.
4.2. Calculation Step 1: Get Predictions
Let’s suppose that we coded neighborhood as 1 and as 0. There are five parameters in our regression model:
where . Let’s say that the training data is the same as earlier:
If , a training algorithm would first get the model’s predictions:
4.3. Calculation Step 2: Compute the Objective Function
Then, it would calculate the cost. In Section 3.1., we computed the mean square loss of for the same predictions. So, the only thing remaining is the regularization term. If we use as the regularization parameter, the term is:
So, the value of the objective function is:
Since the objective function combines the cost and the regularization penalty, its value isn’t easy to interpret.
5. Conclusion
In this article, we explained the meanings of the loss, cost, and objective functions. While some researchers and practitioners use the terms interchangeably, others differentiate between them.