A large language model has been fine-tuned on a variety of instructional tasks. Match each of the following performance observations with the specific type of generalization challenge it represents.
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Two Levels of Generalization in Instruction-Tuned LLMs
Complexity of Generalization due to Instruction and Input Variation
A development team fine-tunes a large language model to act as a helpful assistant for summarizing legal documents, using a large dataset of legal texts paired with their corresponding summaries. After deployment, they observe the following:
- The model performs exceptionally well when asked to summarize new, unseen legal documents (e.g., contracts, court rulings).
- However, when users give it slightly different instructions, such as 'Explain this legal clause in simple terms,' 'Extract the key dates from this document,' or 'Translate this legal paragraph into French,' its performance is poor and unreliable.
Based on this scenario, which statement best analyzes the model's generalization capabilities?
Evaluating Fine-Tuning Strategies for Generalization
Performance Metric for Instruction-Tuned LLMs
Formal Representation of an Instruction-Tuned LLM