Learn Before
A team is developing a system to automatically generate new instructional tasks for a large language model. The system works by first selecting a few existing tasks from a large pool to serve as examples. In one run, the system selects three examples that are all variations of the same task: 'Sort a list of integers in ascending order.' What is the most probable outcome when these highly similar examples are used to prompt the model to generate a new instruction?
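The pipeline described above hinges on the diversity of the sampled in-context examples. A hedged sketch of one way to guard against near-duplicate examples, loosely modeled on the ROUGE-L overlap filter used in Self-Instruct (the function names, the 0.7 threshold, and the greedy sampling loop here are illustrative assumptions, not the paper's exact implementation):

```python
# Sketch: a diversity-aware example sampler. It greedily picks k examples
# from a pool, rejecting any candidate whose ROUGE-L similarity to an
# already-chosen example exceeds a threshold (0.7 is an assumed value,
# echoing the filter Self-Instruct applies to generated instructions).
import random

def lcs_len(a, b):
    # Longest common subsequence length over two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(a, b):
    # ROUGE-L F-measure over whitespace tokens.
    ta, tb = a.lower().split(), b.lower().split()
    lcs = lcs_len(ta, tb)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(ta), lcs / len(tb)
    return 2 * p * r / (p + r)

def sample_diverse_examples(pool, k, threshold=0.7, rng=random):
    # Greedy selection: shuffle the pool, keep a candidate only if it is
    # sufficiently dissimilar to everything already chosen. May return
    # fewer than k items if the pool lacks enough diverse tasks.
    chosen = []
    for cand in rng.sample(pool, len(pool)):
        if all(rouge_l(cand, c) < threshold for c in chosen):
            chosen.append(cand)
        if len(chosen) == k:
            break
    return chosen
```

Run on a pool containing the three sorting variants from the question plus two unrelated tasks, such a filter would admit at most one of the sorting variants, which is exactly the failure mode the question probes: without it, all three near-duplicates can be sampled together.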
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Instruction Generation in Self-Instruct
Critique of a Sampling Strategy for Instruction Generation
Diagnosing a Flaw in an Instruction Generation Pipeline