Learning from Negative Evidence in LLMs
Learning from 'negative evidence' is a prompting technique that improves an LLM's output by asking it to analyze an incorrect example. The prompt pairs a source input with a flawed candidate output, encouraging the model to contrast the two, reflect on the errors, and then generate a better result. A key advantage of this method is that it improves the model's performance within a single prediction using a simple prompt, without requiring any explicit feedback signal.
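As a minimal sketch of the idea, the function below assembles such a contrastive prompt from a source input and a deliberately flawed output. The function name, prompt wording, and the literal Spanish translation used as the flawed example are all illustrative assumptions, not a fixed template; the resulting string would be sent to any LLM chat API.

```python
def build_negative_evidence_prompt(task: str, source: str, flawed_output: str) -> str:
    """Assemble a single prompt that shows the model a flawed output
    and asks it to analyze the errors before generating a better one."""
    return (
        f"Task: {task}\n"
        f"Input: {source}\n"
        f"Candidate output (contains errors): {flawed_output}\n"
        "First, identify the errors in the candidate output. "
        "Then generate a corrected, higher-quality output."
    )

# Illustrative example: an overly literal translation serves as the
# negative evidence the model is asked to improve upon.
prompt = build_negative_evidence_prompt(
    task="Translate the English sentence into formal Spanish",
    source="The early bird gets the worm",
    flawed_output="El pajaro temprano consigue el gusano",
)
print(prompt)
```

Because the analysis and the regeneration are requested in one prompt, the improvement happens within a single model call, matching the single-prediction advantage described above.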
Tags
Ch.3 Prompting - Foundations of Large Language Models
Computing Sciences