Evaluating a Prompt Template for Instruction Generation
A research team is using a large language model to generate new, diverse programming-related tasks. They provide the model with a seed set of instructions like "Write a Python function to sort a list" and "Create a SQL query to find all users from a specific country." However, the model consistently outputs slight variations of the first example instruction it sees, rather than generating genuinely new and different programming tasks. The team is using the following prompt template, which precedes the list of example instructions: "Here are some programming tasks. Please complete the first task in the list." Based on the principles of effective instruction generation, evaluate why this prompt template is failing to produce novel and diverse instructions. What specific change would you make to the template to better guide the model?
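The failure described above can be made concrete with a small sketch. The directive "complete the first task" tells the model to *solve* an existing task, so it latches onto the first example; a revised directive must instead ask for a *new* task and present the seeds as an open-ended list the model continues. The helper below is a hypothetical illustration (the function name, variable names, and exact wording are assumptions, not the team's actual code), in the style of Self-Instruct-like pipelines:

```python
# Hypothetical sketch: contrast the failing directive with a revised one
# that explicitly requests a new, different task.
seed_instructions = [
    "Write a Python function to sort a list.",
    "Create a SQL query to find all users from a specific country.",
]

def build_prompt(directive, instructions):
    """Prepend a directive to a numbered list of example instructions."""
    lines = [directive, ""]
    for i, inst in enumerate(instructions, start=1):
        lines.append(f"Task {i}: {inst}")
    # Leave the next slot open so the model continues the list
    # with a brand-new task rather than answering an existing one.
    lines.append(f"Task {len(instructions) + 1}:")
    return "\n".join(lines)

# The template from the question: asks the model to SOLVE a task.
failing = build_prompt(
    "Here are some programming tasks. "
    "Please complete the first task in the list.",
    seed_instructions,
)

# A revised template: asks the model to GENERATE a new, different task.
revised = build_prompt(
    "Here are some example programming tasks. "
    "Come up with a new task that is different from every task below.",
    seed_instructions,
)
print(revised)
```

The key change is in the directive alone: the list structure is identical in both prompts, so the contrast isolates why wording that requests completion produces near-copies while wording that requests novelty steers the model toward diverse instructions.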
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Foundations of Large Language Models Course
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A machine learning engineer wants to use a large language model to expand a dataset of task instructions. The goal is to generate a new, unique instruction that is stylistically and thematically similar to a small set of existing instructions. Which of the following prompts, when combined with the set of existing instructions, is best designed to achieve this specific goal?
Crafting a Directive for Task Generation