Recall of a Classification Model
Recall, also known as the True Positive Rate (TPR), is defined as True Positives / (True Positives + False Negatives), i.e., what percentage of actual positives the model correctly identifies.
Recall is important in tasks that require as many positives as possible to be identified.
Recall is especially important in medical and legal applications (e.g., tumor detection and fraud detection), where a high-recall model is occasionally paired with a human reviewer who manually filters out the False Positives. Recall is also related to the F1 Score and ROC Curves.
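To make the definition concrete, here is a minimal sketch of computing recall from binary labels; the `recall` helper and the sample data are illustrative, not from the original:

```python
def recall(y_true, y_pred):
    # True Positives: actual positive, predicted positive
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    # False Negatives: actual positive, predicted negative
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # Recall = TP / (TP + FN); guard against no actual positives
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

# Example: 4 actual positives, 3 correctly identified -> recall = 0.75
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]
print(recall(y_true, y_pred))  # 0.75
```

Note that the false positive at index 4 does not affect recall at all; only missed positives (False Negatives) lower it, which is why high-recall systems often need a downstream filter for False Positives.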