Learn Before
A developer is fine-tuning a language model to summarize news articles. They start with the detailed instruction: 'Read the following news article and generate a concise, neutral, one-paragraph summary that captures the main points.' To improve data creation efficiency, they consider simplifying this instruction for the entire training dataset to just: 'Summarize.' What is the most significant risk associated with using this highly simplified instruction for the entire fine-tuning process?
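To make the trade-off concrete, here is a minimal sketch (with hypothetical article text, field names, and helper function) of how the detailed and simplified instructions produce different training examples in a common instruction-tuning format:

```python
# Hypothetical sketch: building instruction-tuning pairs for summarization.
detailed_instruction = (
    "Read the following news article and generate a concise, neutral, "
    "one-paragraph summary that captures the main points."
)
simplified_instruction = "Summarize."

def make_example(instruction, article, summary):
    """Pack one training pair as a prompt/completion dict (assumed format)."""
    return {"prompt": f"{instruction}\n\n{article}", "completion": summary}

# Placeholder article and reference summary for illustration only.
article = "The city council approved a new transit budget on Tuesday..."
summary = "The city council approved a new transit budget."

detailed = make_example(detailed_instruction, article, summary)
simple = make_example(simplified_instruction, article, summary)

# Training the whole dataset on the bare prompt "Summarize." risks the model
# overfitting to that exact phrasing and generalizing poorly when users ask
# for summaries in more varied or more specific ways.
```

The risk the question points at is visible here: every training prompt built from `simplified_instruction` looks identical except for the article, so the model sees no variation in how the summarization task can be requested.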
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of a Simplified Instruction for Chinese Translation
Single-Phrase Instructions for LLMs
Computational Advantages of Simplified Instructions
Harmful Effects of Overly Simplified Instructions on LLM Generalization
Evaluating an Instruction Simplification Strategy
A user wants a language model to act as a friendly chatbot that answers questions about space exploration. Arrange the following instructions from the most detailed and explicit to the most simplified and concise.