Logistic regression loss function vs. cost function
- The loss function computes the error for a single training example;
- The cost function is the average of the per-example losses over the entire training set.
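To make the distinction concrete, here is a minimal NumPy sketch (the predictions, labels, and helper names are made up for illustration):

```python
import numpy as np

def loss(y_hat, y):
    # Loss: error of a single prediction (log loss / cross-entropy).
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def cost(y_hat, y):
    # Cost: average of the per-example losses over the whole training set.
    return np.mean(loss(y_hat, y))

y_hat = np.array([0.9, 0.2, 0.7])  # predicted probabilities (illustrative values)
y = np.array([1, 0, 1])            # true labels
print(loss(y_hat, y))  # one loss value per example
print(cost(y_hat, y))  # one scalar for the whole set
```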
Tags
Data Science
Related
Logistic regression gradient descent
Logistic regression loss function vs. cost function
Logistic Regression Cost Function
A machine learning model is trained for a binary classification task where the goal is to predict a label y (either 0 or 1). The model's prediction, ŷ, is a probability between 0 and 1. The performance on a single example is measured using the loss function: L(ŷ, y) = -(y*log(ŷ) + (1 - y)*log(1 - ŷ)). Consider two scenarios for an example where the true label y is 1:
- Scenario A: The model predicts ŷ = 0.9.
- Scenario B: The model predicts ŷ = 0.1.
Which scenario results in a higher loss value, and why?
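A quick numerical check (a minimal Python sketch; the function name is just for illustration) makes the comparison concrete:

```python
import math

def single_example_loss(y_hat, y):
    # Log loss for one example: L(ŷ, y) = -(y*log(ŷ) + (1 - y)*log(1 - ŷ))
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

print(single_example_loss(0.9, 1))  # Scenario A: -log(0.9) ≈ 0.105
print(single_example_loss(0.1, 1))  # Scenario B: -log(0.1) ≈ 2.303
```

Scenario B yields the far larger loss: with y = 1 the loss reduces to -log(ŷ), which grows without bound as the predicted probability moves away from the true label.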
When training a logistic regression model for binary classification, the standard approach is to use the logarithmic loss function: L(ŷ, y) = -(y*log(ŷ) + (1 - y)*log(1 - ŷ)). An alternative could be the squared error loss: L(ŷ, y) = (ŷ - y)². What is the primary reason the logarithmic loss is preferred for this task?
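One way to illustrate the usual argument, that squared error combined with a sigmoid gives a non-convex objective while the logarithmic loss stays convex, is to compare the curvature of the two losses as a function of the logit z for a single example with y = 1 (a numerical sketch under that framing, not a proof; the grid and tolerance are arbitrary choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-6, 6, 1001)        # logits; ŷ = sigmoid(z)
log_loss = -np.log(sigmoid(z))      # L(ŷ, 1) with the logarithmic loss
sq_error = (sigmoid(z) - 1) ** 2    # L(ŷ, 1) with the squared error loss

# Discrete second differences approximate curvature; convexity requires all >= 0.
print(np.diff(log_loss, 2).min() >= -1e-12)  # True: log loss is convex in z
print(np.diff(sq_error, 2).min() >= -1e-12)  # False: squared error is not
```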
Calculating Loss for a Single Prediction
Logistic regression loss function vs. cost function
(Batch) Gradient Descent (Deep Learning Optimization Algorithm)
True or False: The cost function for logistic regression trained with m≥1 examples is always greater than or equal to zero.
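For a quick sanity check of the claim (a sketch with randomly generated values, not real data): since ŷ always lies strictly between 0 and 1, both -log(ŷ) and -log(1 - ŷ) are positive, so every per-example loss is non-negative and so is their average.

```python
import numpy as np

rng = np.random.default_rng(0)
y_hat = rng.uniform(0.001, 0.999, size=1000)  # predicted probabilities in (0, 1)
y = rng.integers(0, 2, size=1000)             # random 0/1 labels

losses = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
print(losses.min() >= 0)   # True: each per-example loss is non-negative
print(losses.mean() >= 0)  # True: so the cost (the average) is as well
```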