Prompting an LLM for Feedback on a Generated Response
After a Large Language Model (LLM) generates an initial response, a follow-up prompt can ask the same model to evaluate that output and provide feedback. Feeding this feedback back into the model creates an iterative loop for refining the generated content.
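The loop described above can be sketched in code. This is a minimal illustration, not a specific API: `complete` is a hypothetical stand-in for a real LLM call, stubbed here with canned strings so the control flow runs without a model.

```python
# Self-refinement loop sketch: generate a draft, ask the same model
# to critique it, then ask it to revise using that critique.

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real client.
    The canned responses below exist only so the loop is runnable."""
    if "Critique" in prompt:
        return "The answer is vague and omits specific pollutant sources."
    if "Revise" in prompt:
        return ("Urban air pollution stems mainly from vehicle exhaust, "
                "industrial emissions, and residential heating.")
    return "Urban air pollution is caused by things like cars and factories."

def self_refine(question: str, rounds: int = 1) -> str:
    draft = complete(question)  # stage 1: initial response
    for _ in range(rounds):
        # stage 2: prompt the same model for feedback on its own output
        feedback = complete(
            f"Critique the following answer to '{question}':\n"
            f"{draft}\nList concrete weaknesses."
        )
        # stage 3: prompt the model to refine the draft using the feedback
        draft = complete(
            f"Revise the answer below to address the feedback.\n"
            f"Answer: {draft}\nFeedback: {feedback}"
        )
    return draft

print(self_refine("What are the main causes of urban air pollution?"))
```

With a real model behind `complete`, the same three-prompt structure (generate, critique, revise) applies unchanged; only the stub is swapped out.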
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Example of a Prompt to Generate an Initial Response for Self-Refinement
An engineer is designing a two-stage system. In the first stage, a language model generates a draft of a summary for a long article. In the second stage, the model is prompted again to critique and improve that initial draft. Which of the following prompts is most suitable for the first stage of this process?
Role of the Initial Response in a Self-Refinement Loop
A developer is creating an automated process for a language model to improve its own output for a given task. Arrange the following stages of this process in the correct logical order.
Learn After
Using Generated Feedback to Prompt for Response Refinement
Example of a Feedback Generation Prompt
Refining LLM Responses Using Feedback
Refining an LLM Response Using Feedback
A user provides a language model with the following query and receives an initial response:
User Query: "What are the main causes of urban air pollution?"
Initial Response: "Urban air pollution is caused by things like cars and factories."
The user now wants to prompt the same model to critique its own response to identify areas for improvement. Which of the following subsequent prompts is best designed to elicit the most detailed and constructive feedback on the initial response?
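A sketch of what a well-designed feedback-elicitation prompt for this scenario might look like. The exact wording is illustrative, not from the source; the point is that the prompt quotes the original query and response and asks for specific, criterion-based critique rather than a vague "is this good?".

```python
# Constructing a feedback-generation prompt that embeds the original
# query and response and asks for concrete, named weaknesses.

user_query = "What are the main causes of urban air pollution?"
initial_response = ("Urban air pollution is caused by things like "
                    "cars and factories.")

feedback_prompt = (
    f"You previously answered the question: '{user_query}'\n"
    f"Your answer was: '{initial_response}'\n\n"
    "Critique this answer. For each weakness, name the problem "
    "(e.g., vagueness, missing causes, lack of examples) and "
    "suggest a concrete improvement."
)

print(feedback_prompt)
```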
Evaluating a Feedback Generation Prompt
A user is interacting with a language model to refine an explanation. Arrange the following four steps of their interaction into the correct chronological order.