LLM Adaptation Strategy for a Prompt Improvement Tool
A startup is developing a tool to automatically improve user prompts for an image generation service. They plan to use a single, powerful, off-the-shelf language model to perform two distinct functions: a 'generator' role that creates diverse and detailed prompt variations, and an 'evaluator' role that scores the quality of existing prompts. The startup has a limited budget and a dataset of 10,000 rated prompt-image pairs. Describe a practical strategy for adapting the language model for both the 'generator' and 'evaluator' roles, explaining your reasoning for each.
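One practical answer the scenario points toward is to use cheap in-context prompting for the 'generator' role (no training budget required) and light supervised fine-tuning for the 'evaluator' role (the 10,000 rated pairs are labeled scoring data). The sketch below illustrates both halves; the function names, prompt wording, and JSONL training-record schema are illustrative assumptions, not a specific provider's API — the actual fine-tuning file format would depend on the service used.

```python
import json

def build_generator_prompt(user_prompt, n_variations=3):
    # 'Generator' role via prompting alone: instruct the off-the-shelf model
    # to produce diverse rewrites. No training cost, fits a limited budget.
    return (
        f"Rewrite the following image-generation prompt in {n_variations} "
        "diverse, detailed variations, preserving the user's intent.\n"
        f"Original prompt: {user_prompt}"
    )

def to_evaluator_finetune_jsonl(rated_pairs):
    # 'Evaluator' role via supervised fine-tuning: convert the 10,000 rated
    # prompt-image pairs into (input, target-score) records. The JSONL field
    # names here ("input"/"target") are a hypothetical schema.
    lines = []
    for rec in rated_pairs:
        lines.append(json.dumps({
            "input": f"Rate this image-generation prompt from 1 to 10:\n{rec['prompt']}",
            "target": str(rec["score"]),
        }))
    return "\n".join(lines)
```

For example, `to_evaluator_finetune_jsonl([{"prompt": "a cat on a windowsill", "score": 7}])` yields one JSONL training record with the prompt embedded in the input and "7" as the target. The asymmetry is the point: generation benefits from the base model's diversity and needs no labels, while reliable scoring benefits from the labeled dataset the startup already owns.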
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Limitations of Using Off-the-Shelf LLMs for Prompt Optimization
A development team is building a system to automatically improve user-written prompts. For the component that evaluates the quality of a prompt, they decide to adapt a general-purpose language model. The team has a large, high-quality dataset of thousands of prompt-and-quality-score pairs, but they are on a tight deadline and have limited computational resources for training. They opt to adapt the model by providing it with detailed instructions and a few examples of scored prompts within its input context for each evaluation it performs. Which statement best evaluates the team's chosen adaptation strategy?
LLM Adaptation Strategy for a New Product