Learn Before
Avoiding Harms
It is important to avoid harms that may result from classifiers.

Representational harms: harms caused by a system that demeans a social group. For example, sentiment analysis systems have been shown to be more likely to assign negative sentiment to African American names than to other names.

Censorship harms: in tasks like toxicity detection, false-positive errors can lead to the censoring of discourse about certain groups, such as gay people or blind people, because texts that merely mention those groups are mistakenly flagged as toxic.

A model card for an NLP system can help document and surface these potential harms. A model card includes the following information:
• training algorithms and parameters
• training data sources, motivation, and preprocessing
• evaluation data sources, motivation, and preprocessing
• intended use and users
• model performance across different demographic or other groups and environmental situations
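The model card fields listed above can be sketched as a small structured record. This is a minimal illustration only: the class name, field names, and example values below are assumptions for demonstration, not a standardized model card schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card record mirroring the fields listed above."""
    training_algorithm: str            # training algorithm and parameters
    training_data: str                 # sources, motivation, preprocessing
    evaluation_data: str               # sources, motivation, preprocessing
    intended_use: str                  # intended use and users
    performance_by_group: dict = field(default_factory=dict)

# Hypothetical example values for a sentiment classifier.
card = ModelCard(
    training_algorithm="naive Bayes with add-1 smoothing",
    training_data="English movie reviews; lowercased and tokenized",
    evaluation_data="held-out reviews from the same collection",
    intended_use="sentiment analysis of English movie reviews",
    performance_by_group={"overall": 0.81, "group_a": 0.79, "group_b": 0.83},
)

# Reporting per-group performance makes gaps between groups visible.
gap = max(card.performance_by_group.values()) - min(card.performance_by_group.values())
print(f"largest performance gap across groups: {gap:.2f}")
```

Recording performance per group, rather than only a single aggregate score, is what lets a reader of the card spot representational or censorship harms before deployment.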
Tags
Data Science
Related
Bayes Theorem in Machine Learning
Bayes Formula
Article: Multiple Naïve Bayes Classifiers Ensemble for Traffic Incident Detection
Bayes Error Rate for (Naive) Bayes Classifier
Pros and Cons of Naive Bayes Classifier
Training the naive Bayes Classifier
Naïve Bayes as a Language Model
Statistical Significance Tests for Naive Bayes Classifier
Avoiding Harms
Naive Bayes for other text classification tasks
Types of Naive Bayes Classifiers