In the case of Linear Regression, the cost function is

J(θ) = (1/2m) Σ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²,  where hθ(x) = θᵀx
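As a quick numerical check of this formula (a minimal NumPy sketch; the function and variable names are mine, not from the article):

```python
import numpy as np

def linear_cost(theta, X, y):
    """Squared-error cost J(theta) = (1/2m) * sum((h(x) - y)^2)."""
    m = len(y)
    predictions = X @ theta          # h_theta(x) = theta^T x
    return np.sum((predictions - y) ** 2) / (2 * m)

# Tiny example: a perfect fit gives zero cost.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # first column is the bias term
y = np.array([2.0, 3.0, 4.0])
theta = np.array([1.0, 1.0])                         # y = 1 + x fits exactly
print(linear_cost(theta, X, y))  # → 0.0
```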
But for Logistic Regression, the hypothesis is the sigmoid function

hθ(x) = 1 / (1 + e^(−θᵀx))

Plugging this non-linear hypothesis into the squared-error cost above results in a non-convex cost function with many local optima, which makes it very hard for Gradient Descent to find the global optimum.
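The sigmoid that causes this non-convexity can be sketched in a few lines (names are illustrative):

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # → 0.5
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```

Its output is always strictly between 0 and 1, which is why it can be read as a probability that y = 1.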
So, for Logistic Regression the cost function is

Cost(hθ(x), y) = −log(hθ(x))       if y = 1
Cost(hθ(x), y) = −log(1 − hθ(x))   if y = 0

If y = 1:
Cost = 0 if hθ(x) = 1
As hθ(x) → 0, Cost → ∞
If y = 0:
Cost = 0 if hθ(x) = 0
As hθ(x) → 1, Cost → ∞

In other words, the cost is zero when the prediction matches the label exactly, and grows without bound as the model becomes confidently wrong.
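Both branches of this per-example cost can be verified numerically (a small sketch; `cost_single` is my own helper name):

```python
import numpy as np

def cost_single(h, y):
    """Per-example logistic cost: -log(h) if y == 1, -log(1 - h) if y == 0."""
    return -np.log(h) if y == 1 else -np.log(1.0 - h)

print(cost_single(1.0, 1))    # cost is 0 when the prediction is exactly right
print(cost_single(0.0, 0))    # likewise for the y = 0 branch
print(cost_single(0.001, 1))  # large cost: confident but wrong (≈ 6.9)
```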
Combining the two cases into a single expression over all m training examples gives

J(θ) = −(1/m) Σ [ y⁽ⁱ⁾ log(hθ(x⁽ⁱ⁾)) + (1 − y⁽ⁱ⁾) log(1 − hθ(x⁽ⁱ⁾)) ]

To fit the parameters θ, J(θ) has to be minimized, and for that Gradient Descent is used.
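The full cost J(θ) over a training set can be computed as follows (a minimal NumPy sketch; the toy data and names are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y):
    """J(theta) = -(1/m) * sum(y*log(h) + (1-y)*log(1-h))."""
    m = len(y)
    h = sigmoid(X @ theta)
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m

# With theta = 0 every prediction is 0.5, so the cost is log(2) ≈ 0.693.
X = np.array([[1.0, 0.0], [1.0, 1.0]])  # first column is the bias term
y = np.array([0.0, 1.0])
print(logistic_cost(np.zeros(2), X, y))
```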
Gradient Descent – the update rule looks identical to that of Linear Regression:

Repeat {
    θⱼ := θⱼ − (α/m) Σ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾) xⱼ⁽ⁱ⁾
}

but the difference lies in the hypothesis hθ(x), which here is the sigmoid of θᵀx rather than θᵀx itself.
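This update rule can be sketched end to end on a tiny separable dataset (a minimal illustration, assuming a fixed learning rate and iteration count of my choosing, not values from the article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iters=5000):
    """Repeatedly apply theta_j := theta_j - (alpha/m) * sum((h(x) - y) * x_j)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - y) / m  # vectorized gradient of J(theta)
        theta -= alpha * grad
    return theta

# Toy data: label is 1 when the feature exceeds 2.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(X, y)
preds = (sigmoid(X @ theta) >= 0.5).astype(float)
print(preds)  # should match y on this separable toy set
```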