Learn Before
A financial services company deploys a new AI-powered chatbot to answer customer questions. A user discovers that by asking the chatbot a series of seemingly innocent but slightly unusual questions about account policies, they can trick the chatbot into revealing another user's private account balance. The chatbot was not explicitly programmed to handle this specific sequence of questions. Which characteristic of a safe AI system is most clearly compromised in this scenario?
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Evaluating an AI Diagnostic Tool
Match each characteristic of a safe AI system with the scenario that best illustrates a failure of that characteristic.