Concept

Avoiding Harms

It is important to avoid harms that may result from classifiers.

• Representational harms: harms caused by a system that demeans a social group. For example, sentiment analysis systems have been shown to assign more negative sentiment to sentences containing African American names.
• Censorship harms: in toxicity detection, false-positive errors can lead to censoring discourse about certain groups, such as gay people or blind people, because mentions of these identity terms are wrongly flagged as toxic.

A model card for an NLP system can help document and surface these harms. A model card includes the following information:
• training algorithms and parameters
• training data sources, motivation, and preprocessing
• evaluation data sources, motivation, and preprocessing
• intended use and users
• model performance across different demographic or other groups and environmental situations
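The fields above can be sketched as a small data structure. This is a minimal illustration only, assuming a plain Python dataclass; the field names, example values, and the `gap` check are hypothetical, not a standard model-card API.

```python
from dataclasses import dataclass


@dataclass
class ModelCard:
    """Illustrative sketch of a model card's fields (names are assumptions)."""
    training_algorithm: str
    training_params: dict
    training_data: str        # sources, motivation, and preprocessing
    evaluation_data: str      # sources, motivation, and preprocessing
    intended_use: str
    intended_users: str
    group_performance: dict   # metric per demographic or other group


# Hypothetical example values for a sentiment classifier.
card = ModelCard(
    training_algorithm="logistic regression",
    training_params={"C": 1.0},
    training_data="Movie reviews; motivation: sentiment benchmark; lowercased",
    evaluation_data="Held-out reviews; same preprocessing as training",
    intended_use="Research on sentiment classification",
    intended_users="NLP researchers",
    group_performance={"group A": 0.91, "group B": 0.78},
)

# A large performance gap between groups flags a potential harm
# that the model card makes visible.
gap = max(card.group_performance.values()) - min(card.group_performance.values())
print(f"accuracy gap across groups: {gap:.2f}")
```

Reporting performance disaggregated by group, rather than a single overall number, is what lets a reader spot harms like the sentiment-analysis disparity described above.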


Updated 2021-09-26

Tags

Data Science