1. Introduction

Constrained optimization, also known as constraint optimization, is the process of optimizing an objective function with respect to a set of decision variables while imposing constraints on those variables.

In this tutorial, we’ll provide a brief introduction to constrained optimization, explore some examples, and introduce some methods to solve constrained optimization problems.

2. Constrained Optimization

In a constrained optimization problem, the objective function is either a cost function (to be minimized) or a reward/utility function (to be maximized).

Now, let’s examine situations where a function is constrained. A constraint is a hard limit we place on a variable’s value, which bounds the feasible region and prevents the objective function from improving indefinitely in certain directions:

[Figure: illustration of a constraint]

2.1. Optimum’s Location

With nonlinear functions, the optima (maximum or minimum values) can occur either at the boundaries of the feasible region or in its interior:

[Figure: optimum's location in nonlinear cases]

As we can see, the maximum or minimum values can occur in the interior or at boundaries.

In contrast, with linear functions, the maximum or minimum values occur only at boundaries:

[Figure: optimum's location in the linear case]

3. Forms of Constrained Optimization Problems

The following formulation describes the general (canonical) form of a constrained minimization problem:

    \[\begin{array}{rll} \mathrm{min}\, & f(\mathbf{x})\\ \mathrm{s.t.}\, & g_{i}(\mathbf{x})=c_{i} & \text{for }i=1,\ldots,n\, \text{(equality constraints)}\\ & h_{j}(\mathbf{x})\geq d_{j} & \text{for }j=1,\ldots,m\,  \text{(inequality constraints)} \end{array}\]

where \(\mathbf{x}\) is the vector of decision variables, \(f(\mathbf{x})\) is the objective function to be optimized, and \(g_{i}(\mathbf{x})=c_{i}\) for \(i=1,\ldots,n\) and \(h_{j}(\mathbf{x})\geq d_{j}\) for \(j=1,\ldots,m\) are the constraints that must be satisfied.

If we need to maximize the objective function, e.g., the utilization efficiency of network resources, we can negate it to make it fit the general form:

    \[\begin{array}{rl} \mathrm{max}\, & f(\mathbf{x})\\ \mathrm{s.t.}\, & \mathrm{constraints} \end{array}\quad\equiv\quad\begin{array}{rl} \mathrm{min}\, & -f(\mathbf{x})\\ \mathrm{s.t.}\, & \mathrm{constraints} \end{array}\]
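
For instance, here’s a minimal sketch of the canonical form with SciPy’s minimize (assuming SciPy is available; the objective and constraints are toy choices for illustration). Equality constraints are written as expressions that must equal zero, and inequality constraints as expressions that must be non-negative:

    # Minimal sketch: minimize f(x) subject to g(x) = c and h(x) >= d,
    # with toy functions chosen purely for illustration.
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2  # toy objective

    constraints = [
        {"type": "eq", "fun": lambda x: x[0] + x[1] - 2},   # g(x) = c  ->  g(x) - c = 0
        {"type": "ineq", "fun": lambda x: x[0] - 0.5},      # h(x) >= d ->  h(x) - d >= 0
    ]

    res = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints)
    print(res.x, res.fun)

    # To maximize f instead, minimize its negation:
    # minimize(lambda x: -f(x), x0=..., constraints=constraints)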

3.1. Types of Constrained Optimization Problems

Depending on the objective function and constraints, there are several types of constrained optimization problems.

Linear Programming (LP) covers the cases in which the objective function is linear and all constraints are also linear. If all decision variables are integers, then we’ll have an Integer Linear Programming (ILP) problem. If only some of the decision variables are integers (the remaining are continuous), that’s a Mixed Integer Linear Programming (MILP) problem.

For example:

    \[\begin{array}{rl} \mathrm{min}\, & x_{1}+2x_{2}+3x_{3}\\ \mathrm{s.t.}\, & 3x_{1}+x_{2}\geq6\\ & 7x_{1}-2x_{2}\geq1\\ & x_{1},x_{2},x_{3}\in\mathbb{R}_{+} \end{array}\]
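
As a sketch, the LP above can be handed to SciPy’s linprog (assuming SciPy is available). Since linprog expects less-than-or-equal rows, each greater-than-or-equal constraint is multiplied by -1:

    # Solve the LP above with SciPy's linprog, which expects "<=" rows,
    # so each ">=" constraint is negated.
    from scipy.optimize import linprog

    c = [1, 2, 3]                # objective: x1 + 2*x2 + 3*x3
    A_ub = [[-3, -1, 0],         # 3*x1 + x2 >= 6   ->  -3*x1 - x2 <= -6
            [-7, 2, 0]]          # 7*x1 - 2*x2 >= 1 ->  -7*x1 + 2*x2 <= -1
    b_ub = [-6, -1]
    bounds = [(0, None)] * 3     # x1, x2, x3 >= 0

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x, res.fun)        # should report x = (2, 0, 0) with value 2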

Nonlinear Programming (NLP) refers to the problems where the objective function or some constraints are nonlinear. Similar to linear programming, there are Integer Nonlinear Programming (INLP) and Mixed Integer Nonlinear Programming (MINLP).

Here’s an example of a problem with nonlinear constraints:

    \[\begin{array}{rl} \mathrm{min}\, & x_{1}+2x_{2}+3x_{3}\\ \mathrm{s.t.}\, & 3x_{1}x_{2}\geq 6\\ & 7x_{1}-2x_{2}\geq 1\\ & x_{1},x_{2},x_{3} \in \mathbb{R}_{+} \end{array}\]
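
Such a problem needs a general nonlinear solver. Here’s a sketch with SciPy’s SLSQP method (assuming SciPy is available), where each inequality constraint is encoded as an expression required to be non-negative:

    # Solve the nonlinear example with SciPy's SLSQP method.
    from scipy.optimize import minimize

    objective = lambda x: x[0] + 2 * x[1] + 3 * x[2]

    constraints = [
        {"type": "ineq", "fun": lambda x: 3 * x[0] * x[1] - 6},     # 3*x1*x2 >= 6
        {"type": "ineq", "fun": lambda x: 7 * x[0] - 2 * x[1] - 1}, # 7*x1 - 2*x2 >= 1
    ]
    bounds = [(0, None)] * 3  # x1, x2, x3 >= 0

    res = minimize(objective, x0=[2.0, 1.0, 0.0], method="SLSQP",
                   bounds=bounds, constraints=constraints)
    print(res.x, res.fun)     # should land near x = (2, 1, 0) with value 4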

Finally, Quadratic Programming (QP) problems are those with a quadratic objective function and linear constraints. As with linear and nonlinear programming, we also have Integer Quadratic Programming (IQP) and Mixed Integer Quadratic Programming (MIQP) problems.

For instance:

    \[\begin{array}{rl} \mathrm{min}\, & x_{1}+2x_{1}x_{2}+3x_{2}x_{3}\\ \mathrm{s.t.}\, & 3x_{1}+x_{2}\geq6\\ & 7x_{1}-2x_{2}\geq1\\ & x_{1},x_{2},x_{3}\in\mathbb{R}_{+} \end{array}\]
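
Here’s a sketch of this example using SciPy’s trust-constr method with an explicit LinearConstraint object (assuming SciPy is available). Since the bilinear objective isn’t convex, the solver only guarantees a local solution:

    # Solve the quadratic example with SciPy's trust-constr method.
    import numpy as np
    from scipy.optimize import minimize, LinearConstraint, Bounds

    objective = lambda x: x[0] + 2 * x[0] * x[1] + 3 * x[1] * x[2]

    # 3*x1 + x2 >= 6 and 7*x1 - 2*x2 >= 1, written as lb <= A @ x <= ub
    lin_con = LinearConstraint([[3, 1, 0], [7, -2, 0]], [6, 1], [np.inf, np.inf])
    bounds = Bounds(0, np.inf)   # x1, x2, x3 >= 0

    res = minimize(objective, x0=np.array([2.0, 1.0, 0.0]), method="trust-constr",
                   constraints=[lin_con], bounds=bounds)
    print(res.x, res.fun)        # from this start, it should land near (2, 0, 0)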

4. Real-World Example

Now let’s see how we can apply constrained optimization to real-world problems. Let’s suppose we want to maximize the area of a rectangle while keeping its perimeter equal to a predefined value p:

[Figure: simple constrained optimization example]

As we see, the area is A=xy, and that will be our objective function. Further, since the perimeter needs to be equal to p, we have a constrained optimization problem:

    \[\begin{array}{rl} \mathrm{max}\, & A=xy\\ \mathrm{s.t.}\, & 2(x+y)=p \end{array}\]

This problem has a quadratic objective function (xy) and a linear constraint, so it falls into the QP category.
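
Before solving it by hand, we can get a quick numeric sanity check from SciPy (using the assumed value p = 4 for illustration); the optimum should be a square with x = y = p/4:

    # Numeric check of the rectangle problem for p = 4 (an assumed value):
    # maximize xy by minimizing its negation, subject to 2(x + y) = p.
    from scipy.optimize import minimize

    p = 4.0
    neg_area = lambda v: -(v[0] * v[1])
    perimeter = {"type": "eq", "fun": lambda v: 2 * (v[0] + v[1]) - p}

    res = minimize(neg_area, x0=[1.0, 0.5], constraints=[perimeter])
    print(res.x)   # both coordinates should approach p / 4 = 1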

5. Solving Constrained Problems

Let’s now see how we can solve constrained optimization problems.

5.1. Mathematical Methods

There is, unfortunately, no universal method for solving all types of optimization problems.

Different proposed methods are tailored to particular types of optimization problems. The following table lists some classical mathematical methods for solving constrained optimization problems:

[Table: classical mathematical methods for solving constrained optimization problems]

Most of these methods, such as the branch-and-bound algorithm, are exact. Exact optimization techniques are guaranteed to find the optimal solution, but the price for that guarantee is that they can take a long time to complete. In contrast, inexact methods don’t guarantee finding the optimum but usually produce a sufficiently good solution quickly. Metaheuristics such as the Grey Wolf Optimizer fall into the category of inexact methods and are frequently used when we need a reasonably good solution fast.
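
To illustrate the inexact idea, here’s a deliberately simple random-search sketch (far simpler than Grey Wolf) applied to the rectangle problem with p = 4, using a penalty term to discourage constraint violations:

    # Minimal random search with a penalty term: sample candidates,
    # penalize constraint violations, keep the best within a fixed budget.
    import random

    def penalized(x, y, p=4.0):
        violation = abs(2 * (x + y) - p)     # equality-constraint violation
        return -(x * y) + 1000 * violation   # minimize negated area plus penalty

    best, best_val = None, float("inf")
    for _ in range(10_000):                  # fixed evaluation budget
        x, y = random.uniform(0, 4), random.uniform(0, 4)
        val = penalized(x, y)
        if val < best_val:
            best, best_val = (x, y), val

    print(best)  # typically near (1, 1), but with no optimality guarantee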

5.2. The Substitution Method

Let’s explain the substitution method using a simple example with an equality constraint:

    \[\begin{array}{rl} \mathrm{max}\, & f(x,y)=xy\\ \mathrm{s.t.}\, & x+y=2 \end{array}\]

As we see, that’s the same problem as in the rectangle example above. The only difference is that here, p isn’t a free parameter but has the known value 4:

    \[x + y = 2 \iff 2(x + y) = 4\]

Since we have a simple equality constraint, we can use the substitution method. This method works best for problems with an equality constraint from which it’s easy to express one variable in terms of the others. If we’re lucky, substituting that variable into the objective function yields a form that’s easy to solve by hand.

In our example, the constraint gives us a simple substitution:

    \[y=2-x\]

As a result, we can rewrite the objective function:

    \[f(x,y)=f(x)=x(2-x)=2x-x^{2}\]

The maximum of f(x) must satisfy the first-order condition:

    \[f'(x)=0\iff 2-2x=0\iff x=1\]

Since the second derivative f''(x)=-2 is negative, x=1 is indeed a maximum. Consequently, y=2-x=1, and the maximum value of f(x,y) is 1.
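
For completeness, here’s the same derivation reproduced symbolically, assuming SymPy is available:

    # Reproduce the substitution method symbolically with SymPy.
    import sympy as sp

    x = sp.symbols("x", real=True)
    y = 2 - x                     # substitute the constraint x + y = 2
    f = x * y                     # objective becomes 2x - x**2

    critical = sp.solve(sp.diff(f, x), x)  # first-order condition f'(x) = 0
    print(critical)                        # [1]
    print(f.subs(x, critical[0]))          # maximum value: 1
    print(sp.diff(f, x, 2))                # -2 < 0 confirms a maximum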

5.3. Optimization Solvers

The following table summarizes some well-known software tools for constrained optimization problems:

[Table: well-known software tools for constrained optimization]

They all implement exact methods (e.g., the simplex algorithm) and usually combine them with inexact heuristics to reach a solution faster. For instance, CPLEX uses node heuristics along with the branch-and-cut algorithm.
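
As a sketch of how such tools are typically used, here’s the earlier LP modeled in PuLP, an open-source Python modeling library (chosen here for illustration) that calls its bundled CBC solver by default:

    # Model and solve the earlier LP with PuLP and its default CBC solver.
    from pulp import LpProblem, LpMinimize, LpVariable, value

    prob = LpProblem("example_lp", LpMinimize)
    x1 = LpVariable("x1", lowBound=0)
    x2 = LpVariable("x2", lowBound=0)
    x3 = LpVariable("x3", lowBound=0)

    prob += x1 + 2 * x2 + 3 * x3   # objective
    prob += 3 * x1 + x2 >= 6       # constraints
    prob += 7 * x1 - 2 * x2 >= 1

    prob.solve()
    print(value(x1), value(x2), value(x3), value(prob.objective))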

6. Conclusion

In this article, we covered the principles of constrained optimization. There are several types of constrained optimization problems and many methods for solving them. Which one is the best depends on the particular problem at hand.
