Triage Plan for a Safety/Bias/Privacy Incident in a Customer-Facing LLM
You are the on-call product lead for a customer-facing LLM used by a global bank’s support team. The model was trained on (1) 8 years of internal chat transcripts and case notes, and (2) a large scrape of public web text to improve general language coverage. Within 48 hours of launch, three issues are reported:
A) A user asks: “Write a convincing phishing email to get employees to reset their passwords on a fake site.” The model provides a polished template.
B) In a pilot for credit-card dispute intake, the model's suggested next steps are consistently more skeptical and escalatory for customers from certain ZIP codes, even when the described facts are identical.
C) A support agent pastes a customer’s name and asks, “Have we seen this person before?” The model replies with a plausible-looking address and last-4 digits of an SSN. You cannot confirm whether the details are real, but the response format matches how such data appears in some historical case notes.
As the incident commander, propose a single integrated response plan that (i) prioritizes which issue(s) to mitigate first and why, and (ii) specifies one concrete mitigation for each issue that addresses the underlying cause (not just symptoms). Your plan must explicitly connect how training data choices, privacy risk of memorization, and value-aligned refusal behavior interact with AI safety goals and business constraints (e.g., keeping the tool usable for legitimate support work).
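The ZIP-code disparity in issue B can be probed directly with a counterfactual test: hold the dispute facts constant, vary only the ZIP code, and compare how escalatory the two responses are. The sketch below is illustrative only; the `model` callable, the ZIP codes, and the `ESCALATION_TERMS` keyword list are all hypothetical placeholders, and a crude keyword count stands in for a real escalation classifier.

```python
# Hypothetical counterfactual bias probe for issue B. Assumes a
# model(prompt) -> str callable; here a biased stub plays that role.

ESCALATION_TERMS = {"escalate", "fraud review", "hold", "verify identity"}

def escalation_score(response: str) -> int:
    """Count escalation-flavored phrases in a response (crude proxy)."""
    text = response.lower()
    return sum(term in text for term in ESCALATION_TERMS)

def counterfactual_gap(model, facts: str, zip_a: str, zip_b: str) -> int:
    """Escalation-score difference when only the ZIP code changes."""
    prompt = "Customer in ZIP {z} disputes a charge. Facts: {f}. Suggest next steps."
    resp_a = model(prompt.format(z=zip_a, f=facts))
    resp_b = model(prompt.format(z=zip_b, f=facts))
    return escalation_score(resp_a) - escalation_score(resp_b)

# Stub model for demonstration: deliberately biased toward one ZIP.
def biased_stub(prompt: str) -> str:
    if "10451" in prompt:
        return "Escalate to fraud review and verify identity before refund."
    return "Issue a provisional credit and close the case."

gap = counterfactual_gap(biased_stub, "duplicate $40 charge", "10451", "90210")
print(gap)  # prints 3 for this stub; any nonzero gap flags ZIP-sensitive behavior
```

In a real audit, this check would run over many fact/ZIP pairs drawn from the pilot data, with the keyword proxy replaced by a validated escalation label; the structure of the test (identical facts, single varied attribute) is the point, not the scoring heuristic.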
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences