Learn Before
Common Performance Metrics for Classification
- Confusion Matrix
- False-Positive rate (Type-I error)
- False-Negative rate (Type-II error)
- Accuracy
- Recall
- F1 Score
- Optimizing vs. Satisficing
- Specificity (True-Negative rate)
- Negative Predictive Value
- False Discovery Rate
- True-Positive rate (Recall, Sensitivity)
- Positive-Predictive Value (Precision)
- ROC curve and ROC AUC
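As a quick reference, every metric in the list above reduces to arithmetic on the four cells of a binary confusion matrix. The following minimal Python sketch (not part of the linked cards) ties the listed names to their formulas; the counts `tp`, `fp`, `fn`, `tn` are made up for illustration.

```python
# Hypothetical counts from a binary classifier's confusion matrix.
tp, fp, fn, tn = 80, 10, 20, 90

accuracy    = (tp + tn) / (tp + fp + fn + tn)
recall      = tp / (tp + fn)   # true-positive rate, sensitivity
specificity = tn / (tn + fp)   # true-negative rate
precision   = tp / (tp + fp)   # positive predictive value
npv         = tn / (tn + fn)   # negative predictive value
fpr         = fp / (fp + tn)   # false-positive rate (Type-I error)
fnr         = fn / (fn + tp)   # false-negative rate (Type-II error)
fdr         = fp / (fp + tp)   # false discovery rate
f1          = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}, "
      f"recall={recall:.3f}, f1={f1:.3f}")
```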
Tags
Data Science
Related
Example of a classification problem in statistical learning
Methods of Classification
Evaluation Metrics of Classification Models
Classification Evaluation Metrics
What is the difference between classification and regression?
3 Types of Classification Tasks in Machine Learning
Imbalanced Classification vs. Balanced Classification
Common Performance Metrics for Classification
Classification with Missing Data
A data science team is building a predictive model for an e-commerce company. Which of the following tasks represents a classification problem?
Predictive Model Task Analysis
Learn After
Confusion Matrix
ROC Curve and ROC AUC
Precision and Recall Performance Metrics
F1 Score
Optimizing Criteria in Classification Problems
Satisficing Criteria in Classification Problems
Bayes error rate
What evaluation metric would you want to maximize based on the following scenario?
Recall of a Classification Model
Precision of a Classification Model
Sensitivity Analysis of a Classification Model
Learning Curve of a Classification Model
Having three evaluation metrics makes it harder for you to quickly choose between two different algorithms and will slow down the speed with which your team can iterate. True/False?
If you had the following four models, which one would you choose based on the given accuracy, runtime, and memory size criteria?
Coverage
How to choose between precision and recall?
F-Measure
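Several of the follow-up topics above (precision vs. recall, F1 Score, F-Measure) meet in the F-beta score, which generalizes F1 by weighting recall beta times as heavily as precision. A minimal sketch, with illustrative numbers only:

```python
# F-beta score: beta = 1 gives F1; beta > 1 favors recall, beta < 1 favors precision.
def f_beta(precision: float, recall: float, beta: float) -> float:
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.70, 0.90  # hypothetical precision and recall values

print(f_beta(p, r, beta=1.0))  # F1: balanced harmonic mean
print(f_beta(p, r, beta=2.0))  # F2: recall-weighted (e.g., disease screening)
print(f_beta(p, r, beta=0.5))  # F0.5: precision-weighted (e.g., spam filtering)
```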