Diagnosing In-Context Learning Failure
Based on the provided scenario, identify the most probable cause of the model's poor performance and explain why the team's current approach is insufficient.
Tags
Ch.3 Prompting - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Related
Insufficiency of Prompting for Tasks Outside an LLM's Pre-training Scope
A research team is using a powerful, general-purpose language model for a highly specialized task: translating ancient, obscure texts. The team has invested significant effort in creating detailed prompts with multiple high-quality examples of translations. Despite this, the model's outputs are consistently nonsensical. Which of the following statements provides the most accurate evaluation of this situation?
A sufficiently well-engineered prompt, complete with numerous high-quality examples, can overcome the limitations of a language model's pre-training data to successfully perform any new task.
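The few-shot setup the scenario describes can be sketched as a minimal prompt-assembly routine. The template, function name, and example texts below are illustrative assumptions, not details from the scenario. The sketch makes the limitation concrete: the prompt only conditions the model at inference time, so the demonstrations cannot supply knowledge of a language that was absent from the pre-training corpus.

```python
# Hypothetical sketch of the team's few-shot prompting setup: the model is
# shown several source -> target translation pairs before the new input.
# All names and placeholder texts here are illustrative, not from the scenario.

FEW_SHOT_EXAMPLES = [
    ("source text A", "translation A"),
    ("source text B", "translation B"),
    ("source text C", "translation C"),
]

def build_few_shot_prompt(new_source: str) -> str:
    """Assemble an in-context-learning prompt from demonstration pairs."""
    lines = ["Translate the following ancient text into English."]
    for src, tgt in FEW_SHOT_EXAMPLES:
        lines.append(f"Source: {src}")
        lines.append(f"Translation: {tgt}")
    # The new input follows the same pattern; the model is expected to
    # continue after the final "Translation:" cue.
    lines.append(f"Source: {new_source}")
    lines.append("Translation:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("new source text")
```

However many demonstration pairs are added, this construction only reshapes how the model applies what it already learned during pre-training; it does not extend that knowledge, which is why the claim above is the weakest evaluation of the situation.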