In the case of Linear Regression, the cost function is the mean squared error:

J(θ) = (1 / 2m) Σᵢ₌₁ᵐ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²

where hθ(x) = θᵀx is the linear hypothesis and m is the number of training examples.
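The squared-error cost above can be sketched in a few lines of NumPy. This is a minimal illustration (the function name `linear_cost` and the assumption that `X` already contains a bias column of ones are choices made here, not from the original text):

```python
import numpy as np

def linear_cost(theta, X, y):
    """Mean squared error cost J(theta) = (1/2m) * sum((h - y)^2)."""
    m = len(y)
    predictions = X @ theta  # h_theta(x) = theta^T x for every example
    return (1 / (2 * m)) * np.sum((predictions - y) ** 2)

# Example: theta = [0, 1] fits y = x exactly, so the cost is 0.
X = np.array([[1.0, 1.0], [1.0, 2.0]])  # first column is the bias term
y = np.array([1.0, 2.0])
print(linear_cost(np.array([0.0, 1.0]), X, y))
```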
But for Logistic Regression the hypothesis is the sigmoid function, hθ(x) = 1 / (1 + e^(−θᵀx)). Plugging this non-linear hypothesis into the squared-error cost results in a non-convex cost function with many local optima, which is a big problem for Gradient Descent: it can get stuck in a local optimum instead of reaching the global optimum.
So, for Logistic Regression the cost for a single example is defined as:

Cost(hθ(x), y) = −log(hθ(x))        if y = 1
Cost(hθ(x), y) = −log(1 − hθ(x))    if y = 0

If y = 1:
Cost = 0 if hθ(x) = 1
As hθ(x) → 0, Cost → ∞ (the model is penalized heavily for confidently predicting 0 when the true label is 1)

If y = 0:
Cost = 0 if hθ(x) = 0
As hθ(x) → 1, Cost → ∞
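The two cases above can be checked numerically with a short sketch (the helper names `sigmoid` and `logistic_cost` are illustrative, not from the original text):

```python
import numpy as np

def sigmoid(z):
    """Logistic hypothesis: maps any real z into (0, 1)."""
    return 1 / (1 + np.exp(-z))

def logistic_cost(h, y):
    """Per-example cost: -log(h) if y == 1, -log(1 - h) if y == 0."""
    return -np.log(h) if y == 1 else -np.log(1 - h)

print(logistic_cost(1.0, 1))  # correct, confident prediction -> cost 0
print(logistic_cost(0.1, 1))  # wrong, confident prediction -> large cost
```

Note how the cost grows without bound as the prediction moves toward the wrong extreme, exactly as the limits above describe.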
Combining both cases, the overall cost over the training set is:

J(θ) = −(1 / m) Σᵢ₌₁ᵐ [ y⁽ⁱ⁾ log(hθ(x⁽ⁱ⁾)) + (1 − y⁽ⁱ⁾) log(1 − hθ(x⁽ⁱ⁾)) ]

To fit the parameters θ, J(θ) has to be minimized, and for that Gradient Descent is used.
Gradient Descent – the update rule looks identical to that of Linear Regression:

θⱼ := θⱼ − (α / m) Σᵢ₌₁ᵐ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾) xⱼ⁽ⁱ⁾

but the difference lies in the hypothesis hθ(x), which is now the sigmoid function rather than θᵀx.
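The update rule above can be sketched as a simple batch gradient descent loop. This is a minimal illustration (the function `gradient_descent`, the learning rate, the iteration count, and the toy data are assumptions made here for the example):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iterations=1000):
    """Batch gradient descent for logistic regression.
    Update: theta_j := theta_j - (alpha/m) * sum((h - y) * x_j)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iterations):
        h = sigmoid(X @ theta)            # sigmoid hypothesis, not theta^T x
        gradient = (X.T @ (h - y)) / m    # vectorized form of the sum above
        theta -= alpha * gradient
    return theta

# Toy data: first column is the bias term, labels flip at x = 1.5.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(X, y)
print((sigmoid(X @ theta) >= 0.5).astype(int))
```

Note that the vectorized gradient `(X.T @ (h - y)) / m` computes the same per-parameter sum as the update rule written out above, just for all θⱼ at once.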