Learn Before
Assessing Fairness in an AI Hiring Tool
A company is developing a language model to summarize resumes for hiring managers. They are concerned about the model perpetuating societal biases and want to choose the most effective testing strategy to ensure fairness. Evaluate the two proposed methods below and determine which is better for identifying potential biases related to demographic characteristics.
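One widely used strategy for probing demographic bias is counterfactual evaluation: feed the model pairs of resumes that are identical except for a demographic signal (such as a name associated with a particular group) and compare the summaries. Below is a minimal sketch of this idea; the `summarize` function is a placeholder stand-in for the real model, and the names and template are illustrative assumptions, not part of the original scenario.

```python
from difflib import SequenceMatcher

def summarize(resume: str) -> str:
    # Placeholder for the real model call; this stand-in simply
    # returns the first sentence of the resume.
    return resume.split(".")[0] + "."

def counterfactual_resumes(template: str, slot: str, names: list[str]) -> list[str]:
    # Fill the demographic slot with each name to create
    # otherwise-identical resumes.
    return [template.replace(slot, name) for name in names]

def similarity(a: str, b: str) -> float:
    # Rough textual similarity; a real evaluation would also compare
    # sentiment, competence language, or downstream hiring scores.
    return SequenceMatcher(None, a, b).ratio()

template = ("{NAME} has 5 years of experience in software engineering. "
            "Led a team of 4 developers.")
names = ["Emily", "Darnell"]  # illustrative names, chosen arbitrarily

resumes = counterfactual_resumes(template, "{NAME}", names)
summaries = [summarize(r) for r in resumes]

# Mask the names before comparing so only substantive differences count.
masked = [s.replace(n, "{NAME}") for s, n in zip(summaries, names)]
score = similarity(masked[0], masked[1])
print(f"summary similarity: {score:.2f}")
```

A fair model should produce near-identical summaries across the counterfactual pair; a systematically lower similarity (or divergent tone) for one group is evidence of demographic bias.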
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Assessing Fairness in an AI Hiring Tool
An organization is developing a large language model to summarize news articles from various global sources for a diverse, international audience. Their primary ethical concern is that the model might unintentionally amplify stereotypes or misrepresent viewpoints from specific demographic or geopolitical groups. Which of the following evaluation strategies would be the most effective for identifying and quantifying this specific type of representational bias in the model's summaries?
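One way to quantify this kind of representational bias is to group the model's summaries by the demographic or geopolitical group they cover and measure whether the tone differs systematically between groups. The sketch below uses a toy sentiment lexicon and hypothetical summaries purely for illustration; a real evaluation would substitute a validated sentiment or regard classifier and actual model outputs.

```python
from statistics import mean

# Toy sentiment lexicon; assumed for illustration only.
LEXICON = {"heroic": 1, "innovative": 1, "unstable": -1, "chaotic": -1}

def tone(summary: str) -> float:
    # Average lexicon score of the words in a summary (0 if none match).
    scores = [LEXICON[w] for w in summary.lower().split() if w in LEXICON]
    return mean(scores) if scores else 0.0

# Hypothetical summaries, grouped by the region the source article covers.
summaries_by_group = {
    "region_a": ["An innovative reform program.", "Heroic rescue effort succeeds."],
    "region_b": ["Unstable markets rattle investors.", "Chaotic scenes at the summit."],
}

group_tone = {g: mean(tone(s) for s in ss) for g, ss in summaries_by_group.items()}
gap = max(group_tone.values()) - min(group_tone.values())
print(group_tone, f"tone gap: {gap:.2f}")
```

A large, persistent tone gap between groups on comparable source material is a quantitative signal that the model is amplifying stereotypes or misrepresenting one group's viewpoints.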
Critique of a Chatbot Fairness Evaluation Plan