Wikipedia defines optimization as the problem of maximizing or minimizing a real function by systematically choosing input values from an allowed set and computing the value of the function. In other words, when we talk about optimization we are always interested in finding the best solution. Suppose we have some functional form, say f(x), and we want to find its best solution. What does best mean? It means we are interested in either minimizing or maximizing that functional form.
Generally, an optimization problem has three components:
min f(x)
subject to a < x < b
where,
f(x) : Objective function
x : Decision variable
a < x < b : Constraint
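As a sketch, the three components can be wired together in a few lines of Python; the objective (x − 3)² + 1, the bounds a = 0 and b = 5, the starting point, and the step size are all hypothetical choices for illustration:

```python
def f(x):                                # objective function f(x)
    return (x - 3) ** 2 + 1

a, b = 0.0, 5.0                          # constraint: a < x < b

# Projected gradient descent on the decision variable x:
# step along the negative derivative, then clip back into [a, b].
x = 1.0                                  # starting guess inside (a, b)
for _ in range(1000):
    grad = 2 * (x - 3)                   # derivative of the objective
    x = min(max(x - 0.1 * grad, a), b)
```

Here the minimum x = 3 happens to lie inside the allowed interval, so the constraint never becomes active.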
Depending on the number of decision variables, optimization problems may be categorized into two types:
- Univariate optimization problems: Univariate optimization is a non-linear optimization problem with no constraints and a single decision variable whose optimal value we are trying to find.
min f(x), x ∈ R
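Since there is no constraint, a univariate problem reduces to solving f′(x) = 0. A minimal sketch using a hypothetical objective f(x) = 3x² − 12x + 5 and Newton's method on its derivative:

```python
def fprime(x):
    # f(x) = 3x**2 - 12x + 5, so f'(x) = 6x - 12
    return 6 * x - 12

def fsecond(x):
    # f''(x) = 6 > 0, so the stationary point is a minimum
    return 6.0

x = 0.0                          # starting guess
for _ in range(50):
    x -= fprime(x) / fsecond(x)  # Newton step toward f'(x) = 0
```

For this quadratic objective a single Newton step already lands on the minimizer x = 2.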
- Multivariate optimization problems: A multivariate optimization problem has more than one decision variable whose optimal values we are trying to find.
min f(x1, x2, x3, …, xn)
What’s a multivariate optimization problem?
In a multivariate optimization problem, there are multiple variables that act as decision variables in the optimization problem.
z = f(x1, x2, x3, …, xn)
When you look at these types of problems, a general function z can be some non-linear function of the decision variables x1, x2, …, xn. So there are n variables that one can manipulate or choose in order to optimize the function z. Notice that univariate optimization can be illustrated with pictures in two dimensions, because the x-direction holds the decision variable value and the y-direction holds the function value. Multivariate optimization, however, requires pictures in three dimensions, and with more than two decision variables it becomes difficult to visualize.
Types of Multivariate Optimization:
Depending on the constraints, multivariate optimization may be categorized into three types:
- Unconstrained multivariate optimization
- Multivariate optimization with equality constraints
- Multivariate optimization with inequality constraints
- Unconstrained multivariate optimization: As the name suggests, multivariate optimization with no constraints is known as unconstrained multivariate optimization. For example,
min x1 + 2x2 − x1x2 + 2x2²
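A minimal sketch of solving an unconstrained multivariate problem by gradient descent; the convex objective f(x1, x2) = x1² − 2x1x2 + 2x2² below is a hypothetical example chosen so the method converges to a unique minimum:

```python
def f(x1, x2):                       # hypothetical convex objective
    return x1**2 - 2*x1*x2 + 2*x2**2

x1, x2 = 3.0, -2.0                   # arbitrary starting point
for _ in range(2000):
    g1 = 2*x1 - 2*x2                 # partial derivative df/dx1
    g2 = -2*x1 + 4*x2                # partial derivative df/dx2
    x1, x2 = x1 - 0.1 * g1, x2 - 0.1 * g2
```

Both partial derivatives vanish only at (0, 0), which is the minimizer the iteration converges to.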
- Multivariate optimization with equality constraints: In mathematics, equality is a relationship between two quantities or, more generally, two mathematical expressions, asserting that the quantities have the same value or that the expressions represent the same mathematical object. An objective function with more than one decision variable and an equality constraint is known as a multivariate optimization problem with equality constraints. For example,
min 2x1² + 4x2²
subject to 3x1 + 2x2 = 12
Here x1 and x2 are two decision variables with the equality constraint 3x1 + 2x2 = 12.
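One way to sketch this in code: the equality constraint lets us solve for x2 = (12 − 3x1)/2 and substitute it into the objective, reducing the problem to a univariate one (the step size and iteration count below are arbitrary choices):

```python
def f(x1, x2):                        # objective: 2*x1**2 + 4*x2**2
    return 2 * x1**2 + 4 * x2**2

# Substituting x2 = (12 - 3*x1)/2 gives g(x1) = 11*x1**2 - 72*x1 + 144,
# so g'(x1) = 22*x1 - 72; minimize g by gradient descent.
x1 = 0.0
for _ in range(500):
    x1 -= 0.01 * (22 * x1 - 72)
x2 = (12 - 3 * x1) / 2                # recover x2 from the constraint
```

The iteration converges to x1 = 36/11 ≈ 3.27 and x2 = 12/11 ≈ 1.09, which satisfy 3x1 + 2x2 = 12 exactly.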
- Multivariate optimization with inequality constraints: In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is most often used to compare two numbers on the number line by their size. Several notations are used to represent different kinds of inequalities; among them <, >, ≤, and ≥ are the most common. An objective function with more than one decision variable and an inequality constraint is known as a multivariate optimization problem with inequality constraints. For example,
min 2x1² + 4x2²
subject to 3x1 + 2x2 ≤ 12
Here x1 and x2 are two decision variables with the inequality constraint 3x1 + 2x2 ≤ 12.
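With an inequality constraint, a useful first check is whether the unconstrained minimum already satisfies the inequality; if so, the constraint is inactive and nothing more is needed. A sketch for this example (step size and iteration count are arbitrary choices):

```python
def f(x1, x2):                        # objective: 2*x1**2 + 4*x2**2
    return 2 * x1**2 + 4 * x2**2

# Unconstrained gradient descent: df/dx1 = 4*x1, df/dx2 = 8*x2
x1, x2 = 5.0, 5.0
for _ in range(500):
    x1, x2 = x1 - 0.05 * (4 * x1), x2 - 0.05 * (8 * x2)

# Check the inequality constraint at the unconstrained minimum.
feasible = 3 * x1 + 2 * x2 <= 12
```

The unconstrained minimum (0, 0) satisfies 3x1 + 2x2 ≤ 12, so here it is also the constrained solution; if it were infeasible, the constraint would be active and we would instead solve on the boundary 3x1 + 2x2 = 12.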