Insufficiency of Prompting for Tasks Outside an LLM's Pre-training Scope
The effectiveness of prompting is fundamentally constrained by the knowledge an LLM acquired during pre-training. If a task requires knowledge from a domain absent from that pre-training data, such as a low-resource language, even sophisticated prompt engineering will fail to produce good results. The appropriate remedy in this situation is to adapt the model to the target domain, for example through fine-tuning or continued pre-training on domain data.
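To make the limitation concrete, here is a minimal sketch of the kind of few-shot prompt such a team might construct. All names and the demonstration pairs are hypothetical; the point is that in-context examples can only surface knowledge the model already holds, so a prompt like this cannot teach a language missing from pre-training.

```python
def build_few_shot_prompt(examples, query, task="Translate Inuktitut to English"):
    """Assemble a few-shot prompt from (source, target) demonstration pairs.

    However many high-quality demonstrations are added, the prompt only
    conditions the model; it does not inject new linguistic knowledge.
    """
    lines = [f"Task: {task}", ""]
    for src, tgt in examples:
        lines.append(f"Source: {src}")
        lines.append(f"Translation: {tgt}")
        lines.append("")
    lines.append(f"Source: {query}")
    lines.append("Translation:")
    return "\n".join(lines)

# Hypothetical demonstration pairs for illustration only.
demos = [("ᐊᐃ", "hello"), ("ᖁᔭᓐᓇᒦᒃ", "thank you")]
prompt = build_few_shot_prompt(demos, "ᐅᓪᓛᒃᑯᑦ")
print(prompt)
```

If the base model was never exposed to the language, the completion after the final `Translation:` slot will be unreliable regardless of how many demonstration pairs are supplied, which is why domain-adaptive training is the fix rather than more prompt engineering.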
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Diagnosing In-Context Learning Failure
A research team is using a powerful, general-purpose language model for a highly specialized task: translating ancient, obscure texts. The team has invested significant effort in creating detailed prompts with multiple high-quality examples of translations. Despite this, the model's outputs are consistently nonsensical. Which of the following statements provides the most accurate evaluation of this situation?
A sufficiently well-engineered prompt, complete with numerous high-quality examples, can overcome the limitations of a language model's pre-training data to successfully perform any new task.
Learn After
Example of Prompting Failure: Inuktitut Translation
A biotech startup is using a large language model, pre-trained on a general corpus of web text, to analyze and summarize highly specialized research papers on a newly discovered protein family. Despite hiring expert prompt engineers who have tried hundreds of complex, detailed prompts, the model's summaries are frequently inaccurate and miss crucial details. What is the most likely reason for this failure, and what is the most appropriate next step?
Evaluating a Claim about Prompt Engineering
Legacy Code Documentation Failure