Using Generated Feedback to Prompt for Response Refinement
Once a large language model (LLM) has generated feedback on its own output, that feedback can be used to construct a new prompt. This subsequent prompt instructs the LLM to revise and improve its original response, forming a key step in an iterative self-refinement cycle.
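The cycle described above can be sketched in Python. This is a minimal illustration, not a fixed API: the function names, template wording, and the `generate` callable (standing in for any LLM completion call) are all assumptions for the example.

```python
def build_feedback_prompt(query: str, answer: str) -> str:
    """Ask the model to critique its own earlier answer."""
    return (
        f"Question: {query}\n"
        f"Answer: {answer}\n"
        "Critique this answer: list specific omissions, errors, and vague claims."
    )

def build_refinement_prompt(query: str, answer: str, feedback: str) -> str:
    """Fold the generated feedback into a new prompt asking for a revision."""
    return (
        f"Question: {query}\n"
        f"Previous answer: {answer}\n"
        f"Feedback on that answer: {feedback}\n"
        "Rewrite the answer so that it fully addresses every point of feedback."
    )

def refine_once(generate, query: str) -> str:
    """One self-refinement cycle: draft -> self-critique -> refined answer.

    `generate` is any callable mapping a prompt string to model text.
    """
    draft = generate(query)
    feedback = generate(build_feedback_prompt(query, draft))
    return generate(build_refinement_prompt(query, draft, feedback))
```

Running `refine_once` repeatedly, feeding each refined answer back in as the new draft, yields the iterative loop; in practice the loop stops after a fixed number of rounds or once the feedback reports no remaining issues.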
Tags
Ch.3 Prompting - Foundations of Large Language Models
Computing Sciences
Refining an LLM Response Using Feedback
A user provides a language model with the following query and receives an initial response:
User Query: "What are the main causes of urban air pollution?"
Initial Response: "Urban air pollution is caused by things like cars and factories."
The user now wants to prompt the same model to critique its own response to identify areas for improvement. Which of the following subsequent prompts is best designed to elicit the most detailed and constructive feedback on the initial response?
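To make the contrast concrete, here is one illustrative way such prompts might differ. Both strings are assumptions written for this example, not answer options from the original question.

```python
# A vague critique prompt: invites a yes/no reply rather than usable feedback.
vague = "Is this answer good?"

# A detailed critique prompt: restates the query and answer, then asks for
# specific, structured, constructive feedback the model can act on later.
detailed = (
    "Question: What are the main causes of urban air pollution?\n"
    "Answer: Urban air pollution is caused by things like cars and factories.\n"
    "Critique this answer. Specifically: (1) list important causes it omits, "
    "(2) flag vague wording, and (3) suggest concrete details or examples "
    "that would strengthen it."
)
```

The second prompt is better designed for refinement because it anchors the critique to the original query and answer and requests itemized, actionable points rather than a general verdict.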
Evaluating a Feedback Generation Prompt
A user is interacting with a language model to refine an explanation. Arrange the following four steps of their interaction into the correct chronological order.
An AI model provided an initial response to a prompt and was then instructed to generate feedback on its own work. Based on the information below, which follow-up prompt is best designed to guide the model toward a more comprehensive and refined answer?
Initial Prompt: "Summarize the main causes of the Roman Empire's decline."
Initial Response: "The Roman Empire fell mainly due to barbarian invasions."
Generated Feedback: "This response is overly simplistic. It correctly identifies one factor but fails to mention crucial internal factors such as economic instability, political corruption, and overexpansion."
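One way to assemble a strong follow-up prompt from these three pieces is to fill a template that restates the task, the previous answer, and the feedback, then asks for a revision that addresses every point. The template wording below is an illustrative sketch, not a prescribed format.

```python
# Hypothetical refinement-prompt template; field names are illustrative.
REFINEMENT_TEMPLATE = (
    "Original task: {prompt}\n"
    "Your previous answer: {response}\n"
    "Feedback on that answer: {feedback}\n"
    "Write an improved answer that keeps what was correct and "
    "addresses every issue raised in the feedback."
)

followup = REFINEMENT_TEMPLATE.format(
    prompt="Summarize the main causes of the Roman Empire's decline.",
    response="The Roman Empire fell mainly due to barbarian invasions.",
    feedback=(
        "This response is overly simplistic. It correctly identifies one "
        "factor but fails to mention crucial internal factors such as "
        "economic instability, political corruption, and overexpansion."
    ),
)
print(followup)
```

Including all three pieces matters: the task keeps the model on topic, the previous answer preserves what was already correct, and the feedback directs the revision toward the specific gaps identified.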
Arrange the following actions into the correct logical sequence to guide a language model through one cycle of improving its own output.