1. Overview

In this article, we’ll introduce the field of mathematical optimization. First, we’ll give a brief overview of what optimization is. Then, we’ll present the three basic components of an optimization problem: the objective function, the decision variables, and the constraints.

2. Introduction to Optimization

The term optimization refers to making something as good as possible. Mathematical optimization is a very broad area of applied mathematics, since it covers, in general, any method that aims to maximize or minimize a function subject to some constraints.

Its applications appear in fields such as engineering, transportation, finance, marketing, and production. Regardless of the field, an optimization problem consists of three basic components, which we’ll explore one by one in the following sections.

3. Objective Function

The objective function is simply the quantity that we are trying to optimize. It is usually expressed as a function f(x) of the decision variables.

For example, the objective function may correspond to the profit of an investment that we are trying to maximize or the consumption of a type of energy that we want to minimize.
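In code, an objective function is just an ordinary function of the decision variables. As a purely illustrative sketch (the profit formula and its coefficients below are made up), it might look like this:

```python
# Purely illustrative objective function: profit of an investment as a
# function of the amount x invested (made-up coefficients).
def profit(x):
    return 0.07 * x - 1e-6 * x ** 2

# Most numerical solvers minimize, so maximizing profit(x) is usually
# posed as minimizing -profit(x).
def negative_profit(x):
    return -profit(x)
```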

4. Decision Variables

The variables of the objective function that the optimizer can modify correspond to the decision variables of the optimization problem. These variables are also called design variables or manipulated variables.

5. Constraints

Finally, we impose some constraints on the decision variables of the problem in order to restrict the values each variable can take. They are defined as equalities or inequalities and are usually denoted as g_n(x).
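Putting the three components together, a generic optimization problem can be written in the following form, where x denotes the vector of decision variables (a maximization problem can always be rewritten as the minimization of -f):

\min_{x} f(x) \quad \text{subject to} \quad g_n(x) = 0 \;\; \text{or} \;\; g_n(x) \leq 0, \quad n = 1, \dots, N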

6. Example

Now, we’ll define an optimization problem to better illustrate these three terms. A simple example is maximizing the area of a rectangle under a constraint on the sum of its two sides. This example can be formulated as an optimization problem as follows:

\max f(a, b) = a \cdot b \quad \text{subject to} \quad g_1(a, b) = a + b = 10

where the variables a and b correspond to the sides of the rectangle.

In this problem:

  • f(a, b) is the objective function
  • the variables a and b are the decision variables
  • g_1(a, b) is the constraint
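In code, this formulation amounts to two small functions; the names f and g1 below simply mirror the notation above (a minimal sketch):

```python
# Objective: area of the rectangle with sides a and b.
def f(a, b):
    return a * b

# Constraint: the two sides must sum to 10, i.e., a + b - 10 == 0.
def g1(a, b):
    return a + b - 10

# A feasible point: a = 3, b = 7 satisfies the constraint and gives area 21.
print(f(3, 7), g1(3, 7))  # 21 0
```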

To solve this problem, we substitute one variable in the objective function using the constraint equality, and we end up with the maximization of a univariate function:

f(a, b) = a \cdot b = a \cdot (10 - a) = 10a - a^2

To maximize the above function, we compute its derivative with respect to the variable a:

\frac{df}{da} = 10 - 2a

Then, we set it equal to zero and solve for its root:

\frac{df}{da} = 0
10 - 2a = 0
a = 5
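The substitution, the derivative, and its root can also be reproduced symbolically; here is a quick sketch using sympy (assuming the package is available):

```python
import sympy as sp

a = sp.symbols('a', positive=True)

# Substitute b = 10 - a from the constraint into the objective a*b.
area = a * (10 - a)

# Differentiate with respect to a and solve for the critical point.
derivative = sp.diff(area, a)
critical_points = sp.solve(derivative, a)

print(derivative)       # 10 - 2*a
print(critical_points)  # [5]
```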

Since the second derivative is -2 < 0, this critical point is indeed a maximum. From the constraint, b = 10 - a = 5, so a = b. We observe that to maximize the area of a rectangle under this constraint, the rectangle should be a square, here with side 5 and area 25.
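As a numerical cross-check, the same problem can be handed to a general-purpose solver. Here is a sketch using scipy.optimize.minimize, where we minimize the negative area because the solver only minimizes, and choose the SLSQP method since it handles equality constraints:

```python
import numpy as np
from scipy.optimize import minimize

# x = [a, b]; minimize the negative area to maximize the area.
def negative_area(x):
    return -(x[0] * x[1])

# Equality constraint: a + b - 10 == 0.
constraint = {"type": "eq", "fun": lambda x: x[0] + x[1] - 10.0}

result = minimize(negative_area, x0=np.array([1.0, 9.0]),
                  constraints=[constraint], method="SLSQP")

print(result.x)     # approximately [5. 5.]
print(-result.fun)  # approximately 25.0
```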

7. Conclusion

In this article, we introduced mathematical optimization. First, we discussed what the term optimization means. Then, we presented the three basic components of an optimization problem: the objective function, the decision variables, and the constraints. Finally, we illustrated them with a simple example.
