Wikipedia defines optimization as the problem of maximizing or minimizing a real function by systematically choosing input values from an allowed set and computing the value of the function. In other words, when we talk about optimization we are always interested in finding the best solution. Suppose we have some functional form, say f(x), and we are trying to find the best solution for it. What does "best" mean here? It means we are interested in either minimizing or maximizing this functional form.
Generally, an optimization problem has three components:

min f(x) w.r.t x

subject to a < x < b

where,
f(x) : Objective function
x : Decision variable
a < x < b : Constraint
Depending on the constraints, optimization problems may be categorized into two types:
- Constrained optimization problems: the constraints are given, and the solution must satisfy them.
- Unconstrained optimization problems: no constraints are present.
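The distinction can be illustrated with a small sketch in plain Python. The function f(x) = (x - 3)^2 and the bound 0 < x < 2 below are hypothetical, chosen only for illustration: the unconstrained minimum sits at x = 3, but once the constraint is imposed the best feasible point is pinned at the boundary x = 2.

```python
# Sketch: contrast unconstrained vs constrained minima of f(x) = (x - 3)^2.
# The function and bounds are illustrative, not from the article.

def f(x):
    return (x - 3) ** 2

# Dense grid over a wide interval (unconstrained case)
grid = [i / 1000 for i in range(-5000, 5001)]  # x in [-5, 5]
x_unconstrained = min(grid, key=f)

# Same search restricted to the feasible set 0 <= x <= 2 (constrained case)
feasible = [x for x in grid if 0 <= x <= 2]
x_constrained = min(feasible, key=f)

print(x_unconstrained)  # 3.0 (the unconstrained minimizer)
print(x_constrained)    # 2.0, pinned at the constraint boundary
```

The constrained answer is worse in objective value (f(2) = 1 vs f(3) = 0), which is exactly what the constraint forces.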
What’s uni-variate optimization?
Uni-variate optimization is the simplest case of a non-linear optimization problem: it is unconstrained, and there is only one decision variable whose value we are trying to find.

min f(x) w.r.t x

x ∈ R

This is the standard way of writing the problem: f(x) is called the objective function, and the variable you adjust to minimize it, the decision variable, is written below as "w.r.t x". The condition x ∈ R says that x is continuous, i.e., it can take any value on the real number line. Since this is a uni-variate optimization problem, x is a scalar variable, not a vector.
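A problem in this form can be attacked numerically. The sketch below uses ternary search, which repeatedly shrinks a bracketing interval around the minimizer; it assumes f is unimodal on that interval, and the example function f(x) = (x - 1)^2 + 5 and the bracket [-10, 10] are assumptions made purely for illustration.

```python
# Minimal sketch: minimize a univariate function by ternary search.
# Assumes f is unimodal on the bracketing interval [lo, hi];
# the example function and bracket are illustrative, not from the article.

def ternary_search_min(f, lo, hi, tol=1e-9):
    """Shrink [lo, hi] around the minimizer of a unimodal f."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2   # minimizer lies in [lo, m2]
        else:
            lo = m1   # minimizer lies in [m1, hi]
    return (lo + hi) / 2

# Example: f(x) = (x - 1)^2 + 5 has its minimum at x = 1
x_star = ternary_search_min(lambda x: (x - 1) ** 2 + 5, -10, 10)
print(round(x_star, 6))  # 1.0
```

Derivative-based methods (Newton's method, gradient descent) converge faster when f'(x) is available, but a bracketing search like this needs only function evaluations.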
The necessary and sufficient conditions for x to be a minimizer of the function f(x)

In the case of uni-variate optimization, the conditions for a point x to be a (local) minimizer of the function f(x) are:
- First-order necessary condition: f'(x) = 0
- Second-order sufficiency condition: f''(x) > 0
Let us quickly solve a numerical example to understand these conditions better.
min f(x) w.r.t x
Given f(x) = 3x^4 - 4x^3 - 12x^2 + 3

According to the first-order necessary condition:

f'(x) = 12x^3 - 12x^2 - 24x = 0
⇒ 12x(x^2 - x - 2) = 0
⇒ 12x(x - 2)(x + 1) = 0
⇒ x = 0, -1, 2

Now, we want to know which of these 3 values of x are actually minimizers. To do so we check the second-order sufficiency condition:

f''(x) = 36x^2 - 24x - 24

Putting each value of x into f''(x):
f''(x) | x = 0 = -24 < 0 (does not satisfy the sufficiency condition)
f''(x) | x = -1 = 36 > 0 (satisfies the sufficiency condition)
f''(x) | x = 2 = 72 > 0 (satisfies the sufficiency condition)
Hence x = -1 and x = 2 are the actual minimizers of f(x). For these two values:

f(x) | x = -1 = -2
f(x) | x = 2 = -29

Since f(2) = -29 < f(-1) = -2, x = 2 is the global minimizer and x = -1 is only a local minimizer.
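The worked example above can be checked with a few lines of plain Python, hard-coding the derivatives f'(x) = 12x^3 - 12x^2 - 24x and f''(x) = 36x^2 - 24x - 24 computed from f:

```python
# Check the worked example: f(x) = 3x^4 - 4x^3 - 12x^2 + 3.
def f(x):
    return 3 * x**4 - 4 * x**3 - 12 * x**2 + 3

def df(x):   # f'(x) = 12x^3 - 12x^2 - 24x
    return 12 * x**3 - 12 * x**2 - 24 * x

def d2f(x):  # f''(x) = 36x^2 - 24x - 24
    return 36 * x**2 - 24 * x - 24

for x in (0, -1, 2):
    print(x, df(x), d2f(x), f(x))
#  0 -> f' = 0, f'' = -24 (fails sufficiency: a local maximum, f = 3)
# -1 -> f' = 0, f'' =  36 > 0 (local minimum, f = -2)
#  2 -> f' = 0, f'' =  72 > 0 (global minimum, f = -29)
```

All three stationary points satisfy f'(x) = 0, but only x = -1 and x = 2 pass the second-order test, matching the hand calculation.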