Learn Before
Gender Bias in LLMs from Data Imbalance
A prevalent form of bias in LLMs is gender bias, where a model systematically prefers one gender over another. This often stems from class imbalance in the training corpus: for example, the term 'nurse' co-occurs far more often with female references than with male ones, an association the model learns and then reproduces.
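This kind of skew can be checked directly against a corpus. Below is a minimal sketch of window-based co-occurrence counting; the toy corpus, the occupation and pronoun lists, and the WINDOW size are all illustrative assumptions, not taken from this card.

```python
from collections import Counter

# Toy stand-in for a training corpus (illustrative only).
corpus = (
    "The nurse said she would check on the patient. "
    "The engineer explained that he had fixed the bug. "
    "The nurse noted she was tired after a long shift. "
    "The surgeon said he would operate in the morning."
)

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}
OCCUPATIONS = {"nurse", "engineer", "surgeon"}
WINDOW = 5  # tokens on either side of an occupation word

tokens = corpus.lower().replace(".", " ").split()
counts = {occ: Counter() for occ in OCCUPATIONS}

for i, tok in enumerate(tokens):
    if tok in OCCUPATIONS:
        # Tally gendered pronouns within WINDOW tokens of the occupation.
        for neighbor in tokens[max(0, i - WINDOW): i + WINDOW + 1]:
            if neighbor in MALE:
                counts[tok]["male"] += 1
            elif neighbor in FEMALE:
                counts[tok]["female"] += 1

for occ, c in counts.items():
    total = c["male"] + c["female"]
    if total:
        print(f"{occ}: {c['female']}/{total} female-pronoun mentions nearby")
```

Run on a real training corpus, the same counting surfaces the imbalance described above: 'nurse' appearing near female pronouns far more often than male ones, a statistical regularity the model internalizes and reproduces at generation time.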
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Gender Bias in LLMs from Data Imbalance
Data Debiasing by Balancing Categories
Cultural Bias from English-Centric LLM Training Data
Mitigating Bias Through Data Diversity
A financial institution develops a language model to automate loan application approvals. The model is trained on the institution's loan approval data from the last 20 years. During testing, it is discovered that the model denies loans to applicants from certain low-income neighborhoods at a significantly higher rate than other applicants, even when their financial profiles (e.g., credit score, income) are identical. What is the most likely cause of this biased outcome?
Analyzing Bias in an AI-Powered Hiring Tool
Analyzing Potential Bias in a Scientific Summarization Model
You are the product owner for a customer-support L...
You are the risk lead for a company rolling out an...
You lead an internal review board deciding whether...
Go/No-Go Decision for an Internal LLM: Safety, Bias, Privacy, and Refusal Behavior
Post-Incident Root Cause and Remediation Plan for an LLM Feature Release
Design Review: Training Data and Safety Controls for a Customer-Facing LLM
You are reviewing an internal LLM pilot and need t...
Triage Plan for a Safety/Bias/Privacy Incident in a Customer-Facing LLM
Vendor LLM Procurement Decision: Balancing Safety, Bias, Privacy, and Refusal Alignment
Pre-Launch Risk Acceptance Memo for a Regulated-Industry LLM Assistant
Learn After
AI Recruitment Tool Anomaly
A company develops a large language model to assist with writing professional biographies. They notice that when prompted with the job title 'Surgeon', the model generates biographies using male pronouns and associates the character with stereotypically masculine traits. Conversely, when prompted with 'Administrative Assistant', it consistently uses female pronouns and stereotypically feminine traits. What is the most direct cause of this observed behavior?
Evaluating a Data Augmentation Strategy for Bias Mitigation
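One widely studied augmentation strategy for this kind of mitigation is counterfactual data augmentation: adding a gender-swapped copy of each training sentence so gendered terms are balanced across roles. The sketch below is a minimal illustration under stated assumptions; the `gender_swap` helper and the small SWAPS dictionary are illustrative, and real pipelines also handle names, grammatical agreement, and the ambiguous his/her mapping.

```python
# Counterfactual data augmentation sketch: emit a gender-swapped copy
# of each sentence to roughly balance gendered associations.
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her",  # lossy: "her" can be possessive or object; real systems parse this
    "man": "woman", "woman": "man",
}

def gender_swap(sentence: str) -> str:
    """Return a copy of `sentence` with gendered words swapped."""
    out = []
    for token in sentence.split():
        core = token.strip(".,!?").lower()
        if core in SWAPS:
            swapped = SWAPS[core]
            # Preserve leading capitalization and trailing punctuation.
            if token[0].isupper():
                swapped = swapped.capitalize()
            trailing = token[len(token.rstrip(".,!?")):]
            out.append(swapped + trailing)
        else:
            out.append(token)
    return " ".join(out)

corpus = ["The nurse said she would check on him."]
augmented = corpus + [gender_swap(s) for s in corpus]
print(augmented[1])  # -> "The nurse said he would check on her."
```

Training on the original sentences plus their swapped counterparts pushes occupation terms toward equal co-occurrence with both genders, directly attacking the imbalance described at the top of this card.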