Learn Before
Calculating Loss for a Single Prediction
A binary classification model is used to predict a label y (which can be 0 or 1). For a single training example, the true label is y = 0, and the model predicts a probability ŷ = 0.2. Using the loss function L(ŷ, y) = -(y*log(ŷ) + (1 - y)*log(1 - ŷ)), calculate the loss for this example. Use the natural logarithm (ln) and round your answer to two decimal places.
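The requested calculation can be verified with a short script (a sketch using Python's standard `math` module; the `log_loss` helper name is mine, not part of the card):

```python
import math

def log_loss(y_hat, y):
    """Binary cross-entropy loss for a single prediction."""
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# True label y = 0, predicted probability y_hat = 0.2:
# the y*log(y_hat) term vanishes, leaving -ln(1 - 0.2) = -ln(0.8)
loss = log_loss(0.2, 0)
print(round(loss, 2))  # 0.22
```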
Tags
Data Science
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Logistic regression gradient descent
Logistic regression loss function vs. cost function
Logistic Regression Cost Function
A machine learning model is trained for a binary classification task where the goal is to predict a label y (either 0 or 1). The model's prediction, ŷ, is a probability between 0 and 1. The performance on a single example is measured using the loss function: L(ŷ, y) = -(y*log(ŷ) + (1 - y)*log(1 - ŷ)).

Consider two scenarios for an example where the true label y is 1:
- Scenario A: The model predicts ŷ = 0.9.
- Scenario B: The model predicts ŷ = 0.1.

Which scenario results in a higher loss value, and why?
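The two scenarios can be compared numerically (a sketch assuming the natural logarithm, as in the card's loss function; the helper name is mine):

```python
import math

def log_loss(y_hat, y):
    """Binary cross-entropy loss for a single prediction."""
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# True label is 1 in both scenarios, so the loss reduces to -ln(y_hat)
loss_a = log_loss(0.9, 1)  # confident and correct: -ln(0.9)
loss_b = log_loss(0.1, 1)  # confident and wrong:   -ln(0.1)
print(round(loss_a, 3), round(loss_b, 3))  # 0.105 2.303
```

Scenario B's loss is far higher: the loss grows without bound as the predicted probability of the true class approaches 0.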
When training a logistic regression model for binary classification, the standard approach is to use the logarithmic loss function: L(ŷ, y) = -(y*log(ŷ) + (1 - y)*log(1 - ŷ)). An alternative could be the squared error loss: L(ŷ, y) = (ŷ - y)². What is the primary reason the logarithmic loss is preferred for this task?
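One difference between the two losses can be seen numerically (a sketch, not part of the original card; the helper names are mine): the logarithmic loss penalizes a confidently wrong prediction far more heavily than the squared error, which is bounded by 1.

```python
import math

def log_loss(y_hat, y):
    """Logarithmic (cross-entropy) loss for one prediction."""
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

def squared_error(y_hat, y):
    """Squared error loss for one prediction."""
    return (y_hat - y) ** 2

# A confidently wrong prediction: true label 1, predicted probability 0.01
print(round(log_loss(0.01, 1), 2))       # 4.61 -- grows without bound as y_hat -> 0
print(round(squared_error(0.01, 1), 2))  # 0.98 -- can never exceed 1
```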