Troubleshooting a Prompting Strategy
A developer is trying to get a large language model to extract the main subject from a user's question and format it as a single keyword. They provide the model with several examples within the prompt before giving it a new, unseen question. However, the model's output for the new question is unreliable. Analyze the examples provided and identify the most significant flaw that prevents the model from learning the desired task. Explain your reasoning.
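As a starting point for the analysis, a prompt of the kind the question describes can be sketched as below. This is a minimal illustration, not the developer's actual prompt: the questions, keywords, and instruction wording are invented for the example. It shows the shape a well-formed few-shot prompt takes — every demonstration sharing one consistent input/output format — which is exactly the property to check when diagnosing a flawed set of examples.

```python
# Minimal sketch of a few-shot prompt for keyword extraction.
# All demonstration pairs here are hypothetical; reliable in-context
# learning depends on every pair following the same format.

consistent_demos = [
    ("What year did the Apollo 11 mission land on the Moon?", "Apollo 11"),
    ("How tall is the Eiffel Tower?", "Eiffel Tower"),
    ("Who composed the opera Carmen?", "Carmen"),
]

def build_prompt(demos, new_question):
    """Assemble the demonstrations plus the unseen question into one prompt."""
    lines = ["Extract the main subject of each question as a single keyword.\n"]
    for question, keyword in demos:
        lines.append(f"Question: {question}")
        lines.append(f"Keyword: {keyword}\n")
    # The new question ends with a bare "Keyword:" cue so the model's
    # completion lands in the same slot the demonstrations established.
    lines.append(f"Question: {new_question}")
    lines.append("Keyword:")
    return "\n".join(lines)

prompt = build_prompt(consistent_demos, "Where was Marie Curie born?")
print(prompt)
```

If the developer's examples deviate from this pattern (mixed labels, varying output formats, or keywords that don't follow one consistent rule), the model has no single task signature to imitate, which is the kind of flaw the question asks you to identify.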
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.5 Inference - Foundations of Large Language Models
Ch.1 Pre-training - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of In-Context Learning for Sentiment Classification
Example of an Instructional Prompt in a Few-Shot Setting for Sub-Problem Decomposition
Demonstrations in In-Context Learning
A developer wants a language model to consistently translate informal text messages into a formal, professional tone. The goal is to guide the model's output by showing it examples of the desired transformation directly within the query, without altering the model's permanent parameters. Which of the following inputs best applies this in-context learning method?
Analyzing a Prompt's Structure for In-Context Task Learning
A developer is constructing a prompt to teach a language model a new task by providing examples directly in the input. Match each component of the prompt to its specific role in this in-context learning process.
Failure of Standard Few-Shot Prompting for Average Calculation
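The in-context learning setup referenced in these related items — guiding the model with demonstrations in the query itself, without updating any parameters — can be sketched as below. The informal/formal message pairs and the chat-style message structure are illustrative assumptions, not content from any of the cards:

```python
# In-context learning for informal -> formal rewriting, expressed as a
# chat-style message list. No weights change: the task is conveyed
# entirely by the demonstration pairs placed in the input.
# All message texts here are hypothetical examples.

def build_messages(demos, new_input):
    """Turn (informal, formal) demonstration pairs into a message list,
    ending with the new input the model should transform."""
    messages = [{
        "role": "system",
        "content": "Rewrite informal messages in a formal, professional tone.",
    }]
    for informal, formal in demos:
        messages.append({"role": "user", "content": informal})
        messages.append({"role": "assistant", "content": formal})
    messages.append({"role": "user", "content": new_input})
    return messages

demos = [
    ("hey, gonna be late, sry",
     "I apologize for the delay; I will arrive shortly."),
    ("can u send the file asap",
     "Could you please send the file at your earliest convenience?"),
]

msgs = build_messages(demos, "omw, b there in 5")
print(len(msgs))
```

Each demonstration occupies one user/assistant turn pair, so the model's next completion is pushed toward the same informal-to-formal transformation — the pattern the translation question above asks you to recognize among its answer options.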