Wikipedia defines optimization as the problem of maximizing or minimizing a real function by systematically choosing input values from an allowed set and computing the function's value. In other words, optimization is always about finding the best solution. Suppose we have some functional form, say f(x), and we want to find its best solution. What does "best" mean? It means we are interested in either minimizing or maximizing that functional form.
Generally, an optimization problem has three components:

min f(x)

subject to a < x < b

where,

f(x) : Objective function

x : Decision variable

a < x < b : Constraint
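As a minimal sketch of this three-component form, the problem can be solved numerically with SciPy's bounded scalar minimizer. The objective and bounds below are made-up illustrative choices, not taken from the text:

```python
from scipy.optimize import minimize_scalar

# Objective function (illustrative choice): f(x) = (x - 3)^2
def f(x):
    return (x - 3) ** 2

# Minimize f(x) subject to the constraint 0 < x < 2.
# The unconstrained minimizer x = 3 lies outside the interval,
# so the constrained minimum sits at the boundary, near x = 2.
result = minimize_scalar(f, bounds=(0, 2), method="bounded")
print(result.x, result.fun)
```

Here `minimize_scalar` handles all three components at once: `f` is the objective, the scalar `x` is the decision variable, and `bounds` encodes the constraint.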
Depending on the number of decision variables, optimization problems may be categorized into two types:

- Uni-variate optimization problems: a non-linear optimization with no constraints and a single decision variable whose value we are trying to find:

  min f(x) w.r.t. x, x ∈ R

- Multivariate optimization problems: an optimization with more than one decision variable whose values we are trying to find:

  min f(x1, x2, x3, …, xn)
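The multivariate form above can be sketched numerically as well. A minimal sketch, using an assumed quadratic objective in two decision variables (not from the text):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative multivariate objective (made-up for this sketch):
# f(x1, x2) = (x1 - 1)^2 + (x2 - 2.5)^2, minimized at (1, 2.5).
def f(x):
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

# Unconstrained minimization from an arbitrary starting point;
# the solver manipulates both decision variables at once.
result = minimize(f, x0=np.array([0.0, 0.0]))
print(result.x)  # close to (1, 2.5)
```

The only structural difference from the uni-variate case is that the decision variable is now a vector, so `minimize` is used instead of `minimize_scalar`.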
In the uni-variate case the problem is typically written in the form above: minimize f(x), where f(x) is called the objective function, and the variable used to minimize it, written below as "w.r.t. x", is called the decision variable. Here x is continuous, meaning it can take any value on the real number line. In the multivariate case, a general objective z = f(x1, x2, x3, …, xn) is some non-linear function of the decision variables x1, x2, x3, …, xn; there are n variables that one can manipulate or choose in order to optimize z.

Below is a table of differences between Uni-variate Optimization and Multivariate Optimization:

| Uni-variate Optimization | Multivariate Optimization |
|---|---|
| A non-linear optimization with no constraints and only one decision variable whose value we are trying to find. | Multiple variables act as decision variables in the optimization problem, z = f(x1, x2, x3, …, xn). |
| There is only one decision variable. | There is more than one decision variable. |
| x is a scalar variable, not a vector variable. | x is a vector of decision variables. |
| Can be illustrated with pictures in two dimensions: the decision variable value on the x-axis and the function value on the y-axis. | With two decision variables, illustration requires pictures in three dimensions. |
| There is no constraint. | There may be no constraints, equality constraints, or inequality constraints. |
| The first-order necessary condition for x* to be the minimizer of f(x) is f′(x*) = 0. | In the unconstrained case, the first-order necessary condition for x̄* to be the minimizer of f(x̄) is ∇f(x̄*) = 0. |
| The second-order sufficiency condition for x* to be the minimizer of f(x) is f″(x*) > 0. | In the unconstrained case, the second-order sufficiency condition for x̄* to be the minimizer of f(x̄) is that the Hessian ∇²f(x̄*) is positive definite. |
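The first- and second-order conditions in the last two rows can be checked numerically. A minimal sketch, assuming a made-up quadratic objective whose gradient and Hessian are written out analytically:

```python
import numpy as np

# Assumed example objective: f(x1, x2) = (x1 - 1)^2 + 2*(x2 + 3)^2
# Its candidate minimizer is x̄* = (1, -3).
x_star = np.array([1.0, -3.0])

def grad(x):
    # Analytic gradient of the example objective
    return np.array([2 * (x[0] - 1), 4 * (x[1] + 3)])

# Analytic Hessian of the example objective (constant for a quadratic)
hessian = np.array([[2.0, 0.0],
                    [0.0, 4.0]])

# First-order necessary condition: the gradient vanishes at x̄*
first_order = np.allclose(grad(x_star), 0.0)

# Second-order sufficiency: the Hessian is positive definite,
# i.e. all of its eigenvalues are strictly positive
second_order = bool(np.all(np.linalg.eigvalsh(hessian) > 0))

print(first_order, second_order)
```

Checking eigenvalues with `eigvalsh` (valid here because the Hessian is symmetric) is one common way to test positive definiteness; both conditions holding confirms x̄* is a local minimizer.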