Crafting a Refinement Prompt
Imagine you are using a language model to get a summary of a complex topic. You have already received an initial response and then prompted the model to provide feedback on its own work.
Initial Response: 'Quantum entanglement is a physical phenomenon where the quantum state of each particle in a group cannot be described independently of the others. This non-local correlation means a measurement on one particle instantaneously influences the others, even at great distances, challenging classical intuitions.'
Model's Self-Generated Feedback: 'The summary is accurate but uses technical jargon like "non-local correlation" and "classical intuitions" that may not be clear to a general audience. The explanation could be simplified using an analogy.'
Based on the information above, write the next prompt you would use to instruct the model to improve its initial response.
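One way to see how such a follow-up prompt can be assembled is a short sketch. The function name `build_refinement_prompt` and the exact wording of the instruction are illustrative assumptions, not the expected answer to the exercise.

```python
# Sketch: assemble a refinement prompt from an initial response and the
# model's self-generated feedback. All names and wording are illustrative.

def build_refinement_prompt(initial_response: str, feedback: str) -> str:
    """Combine the model's first answer and its critique into one follow-up prompt."""
    return (
        "Here is your previous summary:\n"
        f"{initial_response}\n\n"
        "Here is your own feedback on it:\n"
        f"{feedback}\n\n"
        "Rewrite the summary for a general audience: replace technical "
        "jargon with plain language and add a simple analogy, while "
        "keeping the explanation accurate."
    )

prompt = build_refinement_prompt(
    "Quantum entanglement is a physical phenomenon ...",
    "The summary is accurate but uses technical jargon ...",
)
print(prompt)
```

The key design choice is that the follow-up prompt quotes both the original answer and the critique, so the model revises against concrete targets rather than regenerating from scratch.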
Ch.3 Prompting - Foundations of Large Language Models
Example of a Prompt Template for Response Refinement
An AI model provided an initial response to a prompt and was then instructed to generate feedback on its own work. Based on the information below, which follow-up prompt is best designed to guide the model toward a more comprehensive and refined answer?
Initial Prompt: "Summarize the main causes of the Roman Empire's decline."
Initial Response: "The Roman Empire fell mainly due to barbarian invasions."
Generated Feedback: "This response is overly simplistic. It correctly identifies one factor but fails to mention crucial internal factors such as economic instability, political corruption, and overexpansion."
Arrange the following actions into the correct logical sequence to guide a language model through one cycle of improving its own output.
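The sequencing exercise above can be sketched as one loop of generate, critique, and refine. The `model_generate` stub below is purely hypothetical and stands in for a real language-model call.

```python
# One generate -> critique -> refine cycle. `model_generate` is a stub
# standing in for a real language-model API call.

def model_generate(prompt: str) -> str:
    # Stub: a real implementation would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def refine_once(task: str) -> str:
    draft = model_generate(task)                # 1. initial response
    critique = model_generate(                  # 2. self-generated feedback
        f"Critique this answer to '{task}':\n{draft}"
    )
    improved = model_generate(                  # 3. refined response
        f"Task: {task}\nDraft: {draft}\nFeedback: {critique}\n"
        "Rewrite the draft so it addresses the feedback."
    )
    return improved

result = refine_once("Summarize the main causes of the Roman Empire's decline.")
print(result)
```

The logical order is fixed by data dependencies: the critique needs the draft, and the refinement needs both the draft and the critique.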