Learn Before
  • Using Generated Feedback to Prompt for Response Refinement

Sequence Ordering

Arrange the following actions into the correct logical sequence to guide a language model through one cycle of improving its own output.
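The cycle being ordered here — initial answer, self-critique, then revision guided by that critique — can be sketched in code. This is a minimal illustration, not the platform's implementation: the `generate` function below is a hypothetical stand-in for any language-model call and simply returns canned text so the example is self-contained.

```python
# A minimal sketch of one self-refinement cycle. `generate(prompt)` is a
# hypothetical placeholder for a real LLM API call; it returns canned
# answers here so the example runs without any external service.

def generate(prompt: str) -> str:
    canned = {
        "answer": "The Roman Empire fell mainly due to barbarian invasions.",
        "feedback": "Overly simplistic; it omits internal factors such as "
                    "economic instability, political corruption, and overexpansion.",
        "refined": "The decline stemmed from barbarian invasions combined with "
                   "economic instability, political corruption, and overexpansion.",
    }
    if "critique" in prompt.lower():
        return canned["feedback"]
    if "revise" in prompt.lower():
        return canned["refined"]
    return canned["answer"]

def refine_once(task: str) -> str:
    # 1. Obtain an initial response to the task.
    initial = generate(task)
    # 2. Prompt the model to critique its own output.
    feedback = generate(f"Critique this answer to '{task}':\n{initial}")
    # 3. Feed task, initial answer, and feedback back in, asking for a revision.
    refined = generate(
        f"Revise the answer.\nTask: {task}\nAnswer: {initial}\nFeedback: {feedback}"
    )
    return refined

print(refine_once("Summarize the main causes of the Roman Empire's decline."))
```

With a real model behind `generate`, the same three-step loop can be repeated until the feedback reports no remaining shortcomings.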

Updated 2025-10-10

Contributors: Gemini AI, from Google.

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Comprehension in Revised Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science

Related
  • Crafting a Refinement Prompt

  • Example of a Prompt Template for Response Refinement

  • An AI model provided an initial response to a prompt and was then instructed to generate feedback on its own work. Based on the information below, which follow-up prompt is best designed to guide the model toward a more comprehensive and refined answer?

    Initial Prompt: "Summarize the main causes of the Roman Empire's decline."

    Initial Response: "The Roman Empire fell mainly due to barbarian invasions."

    Generated Feedback: "This response is overly simplistic. It correctly identifies one factor but fails to mention crucial internal factors such as economic instability, political corruption, and overexpansion."

  • Arrange the following actions into the correct logical sequence to guide a language model through one cycle of improving its own output.
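The Roman Empire example above (initial prompt, initial response, generated feedback) can be assembled into a single refinement prompt. The sketch below shows one plausible template; the template text and field names (`initial_prompt`, `initial_response`, `feedback`) are assumptions for illustration, not a prescribed format.

```python
# A minimal sketch of a refinement prompt template. The wording and the
# placeholder field names are illustrative assumptions.
REFINEMENT_TEMPLATE = """\
Original task: {initial_prompt}
Your previous answer: {initial_response}
Feedback on that answer: {feedback}

Using the feedback above, write a revised answer that addresses every
shortcoming it identifies while keeping what was correct."""

prompt = REFINEMENT_TEMPLATE.format(
    initial_prompt="Summarize the main causes of the Roman Empire's decline.",
    initial_response="The Roman Empire fell mainly due to barbarian invasions.",
    feedback=(
        "This response is overly simplistic. It correctly identifies one "
        "factor but fails to mention crucial internal factors such as "
        "economic instability, political corruption, and overexpansion."
    ),
)
print(prompt)
```

Because the template includes the original task, the flawed answer, and the critique together, the follow-up prompt steers the model toward a revision that is both corrected and complete rather than merely rephrased.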

© 1Cademy 2026