Learn Before
Failure of Standard Few-Shot Prompting for Average Calculation
Providing a Large Language Model with a standard few-shot prompt that contains only question-answer pairs, without detailed problem-solving steps, is often insufficient for complex reasoning tasks. For example, a demonstration that states only the final answer to an averaging problem, without showing the sum-and-divide steps, does not effectively teach the model how to calculate averages. When subsequently asked for the average of a new set of numbers, the model cannot derive the calculation path on its own and may output an incorrect value instead of the correct answer.
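The contrast described above can be sketched in code: a standard few-shot prompt shows only the answer, while a chain-of-thought-style prompt spells out the intermediate calculation the model is expected to imitate. This is a minimal illustration; the specific numbers and prompt wording are invented for the sketch and are not from the original example.

```python
# Hypothetical sketch contrasting an answer-only few-shot prompt with one
# that includes the problem-solving steps. Numbers are illustrative only.

def standard_few_shot_prompt(new_numbers):
    """Answer-only demonstration: gives the result but no calculation path."""
    demo = "Q: What is the average of 2, 4, 6, and 8?\nA: 5\n\n"
    query = f"Q: What is the average of {', '.join(map(str, new_numbers))}?\nA:"
    return demo + query

def chain_of_thought_prompt(new_numbers):
    """Demonstration that shows the intermediate steps: sum, then divide."""
    demo = (
        "Q: What is the average of 2, 4, 6, and 8?\n"
        "A: The sum is 2 + 4 + 6 + 8 = 20. There are 4 numbers, "
        "so the average is 20 / 4 = 5.\n\n"
    )
    query = f"Q: What is the average of {', '.join(map(str, new_numbers))}?\nA:"
    return demo + query

def true_average(numbers):
    """Reference calculation for checking a model's output."""
    return sum(numbers) / len(numbers)

if __name__ == "__main__":
    nums = [3, 7, 11]
    print(standard_few_shot_prompt(nums))
    print(chain_of_thought_prompt(nums))
    print("Correct answer:", true_average(nums))  # 7.0
```

With the answer-only prompt, the model sees no worked steps to generalize from; the chain-of-thought version makes the sum-and-divide procedure explicit in the demonstration itself.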
Tags
Foundations of Large Language Models
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Example of In-Context Learning for Sentiment Classification
Example of an Instructional Prompt in a Few-Shot Setting for Sub-Problem Decomposition
Troubleshooting a Prompting Strategy
Demonstrations in In-Context Learning
A developer wants a language model to consistently translate informal text messages into a formal, professional tone. The goal is to guide the model's output by showing it examples of the desired transformation directly within the query, without altering the model's permanent parameters. Which of the following inputs best applies this in-context learning method?
Analyzing a Prompt's Structure for In-Context Task Learning
A developer is constructing a prompt to teach a language model a new task by providing examples directly in the input. Match each component of the prompt to its specific role in this in-context learning process.