Wikipedia defines optimization as the problem of maximizing or minimizing a real function by systematically choosing input values from an allowed set and computing the value of the function. In other words, when we talk about optimization we are always interested in finding the best solution. Suppose we have some functional form, say f(x), and we are trying to find the best solution for it. What does "best" mean? It means either minimizing or maximizing that functional form.

Generally, an optimization problem has three components.

**minimize f(x), w.r.t. x, subject to a < x < b**

where,

- f(x) : Objective function
- x : Decision variable
- a < x < b : Constraint
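The three components above map directly onto code. As a minimal sketch (not from the original article), here is a bounded univariate problem solved with SciPy's `minimize_scalar`; the objective f(x) = (x − 3)² and the bounds 0 < x < 5 are illustrative assumptions:

```python
from scipy.optimize import minimize_scalar

# Hypothetical objective: f(x) = (x - 3)^2, subject to 0 < x < 5
f = lambda x: (x - 3) ** 2

# The "bounded" method restricts the search to the interval [a, b],
# which plays the role of the constraint a < x < b.
res = minimize_scalar(f, bounds=(0, 5), method="bounded")
print(res.x)  # optimum near x = 3
```

Here the constraint is inactive (the unconstrained minimum x = 3 already lies inside the interval); tightening the bounds to, say, (0, 2) would push the optimum to the boundary.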

**What’s a multivariate optimization problem?**

In a multivariate optimization problem, there are multiple variables that act as decision variables in the optimization problem.

#### z = f(x_{1}, x_{2}, x_{3}…..x_{n})

In these problems, a general function z can be some non-linear function of the decision variables x_{1}, x_{2}, …, x_{n}. So there are n variables that one can manipulate or choose in order to optimize the function z. Notice that univariate optimization can be explained with pictures in two dimensions, because the x-direction holds the decision variable value and the y-direction holds the value of the function. For multivariate optimization with two decision variables we need pictures in three dimensions, and with more than two decision variables visualization becomes difficult.
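Even when we cannot visualize more than two decision variables, a numerical optimizer can still search the space. A minimal sketch using SciPy's `minimize` (the quadratic objective and starting point are illustrative assumptions, not from the article):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical multivariate objective z = f(x1, x2) = (x1 - 1)^2 + (x2 + 2)^2,
# whose minimum is clearly at (1, -2).
def z(x):
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2

# Gradient-based search starting from the origin (default BFGS method).
res = minimize(z, x0=np.zeros(2))
print(res.x)  # converges near (1, -2)
```

The same call works unchanged for any number of decision variables; only the length of `x0` changes.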

**What’s multivariate optimization with an equality constraint?**

In mathematics, equality is a relationship between two quantities or, more generally, two mathematical expressions, asserting that the quantities have the same value or that the expressions represent the same mathematical object. So if an objective function has more than one decision variable and is subject to an equality constraint, the problem is known as multivariate optimization with an equality constraint.

**Example**:

##### min 2x_{1}^{2} + 4x_{2}^{2}

subject to

3x_{1} + 2x_{2} = 12

Here x_{1} and x_{2} are two decision variables with the equality constraint 3x_{1} + 2x_{2} = 12.

**Condition for identifying the optimum point in the equality constraint case**

If there is a single equality constraint, the condition is

-∇ f(x^{*}) = λ^{*} ∇ h(x^{*})

If there is more than one equality constraint, the condition is

-∇ f(x^{*}) = Σ_{i=1}^{l} [∇ h_{i}(x^{*})] λ_{i}^{*}

where,

f(x^{*}) = f(x_{1}, x_{2}, …, x_{n}) = Objective function

h(x^{*}) = h(x_{1}, x_{2}, …, x_{n}) = Equality constraint

λ^{*} ∈ R is the Lagrange multiplier
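The condition above, together with the constraint itself, gives a system of equations that can be solved symbolically. As a sketch (assuming SymPy is available; the problem is the example stated earlier, min 2x_{1}² + 4x_{2}² subject to 3x_{1} + 2x_{2} = 12):

```python
import sympy as sp

# Decision variables and the Lagrange multiplier
x1, x2, lam = sp.symbols("x1 x2 lam")
f = 2 * x1**2 + 4 * x2**2          # objective function
h = 3 * x1 + 2 * x2 - 12            # equality constraint, h = 0

# The optimality condition -∇f = λ ∇h, one equation per variable,
# plus the constraint equation itself.
eqs = [
    -sp.diff(f, x1) - lam * sp.diff(h, x1),  # -∂f/∂x1 = λ ∂h/∂x1
    -sp.diff(f, x2) - lam * sp.diff(h, x2),  # -∂f/∂x2 = λ ∂h/∂x2
    h,
]
sol = sp.solve(eqs, [x1, x2, lam], dict=True)[0]
print(sol)  # x1 = 36/11, x2 = 12/11, λ = -48/11
```

The solver returns the candidate optimum as exact rationals, which is convenient for checking hand calculations.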

Let us quickly solve a numerical example on this to understand these conditions better.

**Numerical Example:**

Problem: min 2x_{1}^{2} + 4x_{2}^{2} subject to 3x_{1} + 2x_{2} = 12

Solution: Here, Objective function f(x) = 2x_{1}^{2} + 4x_{2}^{2} and Equality constraint h(x) = 3x_{1} + 2x_{2} - 12 = 0. For identifying the optimum point we can write the condition as -∇ f(x^{*}) = λ^{*} ∇ h(x^{*}). Hence, differentiating with respect to x_{1} gives -4x_{1} = 3λ. Similarly, differentiating with respect to x_{2} gives -8x_{2} = 2λ. And we already have the equality constraint equation 3x_{1} + 2x_{2} = 12. By solving these three equations we get the optimum solution along with the value of the variable λ: λ = -48/11, x_{1} = 36/11, and x_{2} = 12/11. Hence x_{1}^{*} = 36/11 and x_{2}^{*} = 12/11 is our optimum solution, with objective value f(x^{*}) = 288/11.
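The analytic answer can be double-checked numerically. A sketch using SciPy's SLSQP method, which handles equality constraints directly (SciPy and the starting point are assumptions, not part of the article):

```python
import numpy as np
from scipy.optimize import minimize

# Objective and equality constraint from the worked example
f = lambda x: 2 * x[0] ** 2 + 4 * x[1] ** 2
cons = {"type": "eq", "fun": lambda x: 3 * x[0] + 2 * x[1] - 12}

# SLSQP supports equality constraints; start from an arbitrary feasible-ish guess
res = minimize(f, x0=np.array([1.0, 1.0]), method="SLSQP", constraints=cons)
print(res.x)  # ≈ [3.2727, 1.0909], i.e. (36/11, 12/11)
```

The numerical optimum agrees with the Lagrange-multiplier solution to several decimal places.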

**Why are constraints important in optimization problems from a data science viewpoint?**

We look at optimization from a data science viewpoint because we are usually trying to minimize error. In many data science problems, when we minimize error we use some kind of **gradient-based algorithm**, which we call the **learning algorithm**. In some cases, while minimizing the error or objective function, we may know some information about the problem that we want to incorporate into the solution. For example, suppose you are trying to uncover relationships between several variables: you do not know how many relationships there are, but you know for sure that certain relationships exist and what they are. Then, when solving the data science problem, you would constrain the problem so that the known relationships are satisfied. That poses an optimization problem with constraints, in particular equality constraints, and there are several other cases where constrained versions of the problem arise while solving data science problems. So it is important to understand how these problems are solved. Inequality constraint problems are in fact even more relevant than equality constraint problems; for example, algorithms for inequality constraints are central to the data science algorithm called **support vector machines**.
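To make the "gradient-based learning algorithm" idea concrete, here is a minimal gradient-descent sketch that minimizes a squared-error objective by fitting a single weight w in y ≈ w·x. The data points and learning rate are illustrative assumptions:

```python
# Minimal gradient descent on a squared-error objective:
# minimize E(w) = sum_i (w * x_i - y_i)^2
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x

w, lr = 0.0, 0.01  # initial weight and learning rate
for _ in range(500):
    # dE/dw = sum_i 2 * (w * x_i - y_i) * x_i
    grad = sum(2 * (w * xi - yi) * xi for xi, yi in zip(xs, ys))
    w -= lr * grad  # step against the gradient

print(round(w, 2))  # converges near w = 2.03
```

This is an unconstrained problem; adding the kind of equality constraints discussed above would require a method such as Lagrange multipliers or a constrained solver instead of plain gradient descent.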


