Manual Feedback Generation in Self-Refinement
One approach to gathering feedback is manual review: human annotators read the model's outputs and point out errors or areas needing improvement. This typically yields the most accurate and nuanced feedback, but it is slow and resource-intensive, so it scales poorly to high-volume settings.
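The manual-review loop can be sketched in a few lines of Python. This is a minimal illustration, not a specific library's API: `generate` is a stand-in for any language-model call, and `collect_human_feedback` is a scripted stand-in for the human reviewer, who in practice would read each draft and write the critique by hand. The city-council summary scenario is borrowed from the practice question below.

```python
def generate(prompt: str) -> str:
    # Placeholder for a real language-model call. This stub returns a
    # better summary once reviewer feedback appears in the prompt, to
    # illustrate how the loop converges.
    if "reviewer feedback" in prompt.lower():
        return ("The city council approved a new public park project "
                "after a lengthy debate over budget and location.")
    return "The city council had a meeting."


def collect_human_feedback(draft: str) -> "str | None":
    # Stand-in for the manual review step: a human reads the draft and
    # writes a correction, or returns None if the draft is acceptable.
    if "approved" not in draft:
        return ("The summary omits the key outcome: the council approved "
                "a new public park after debating budget and location.")
    return None


def refine(task: str, max_rounds: int = 3) -> str:
    # One self-refinement cycle: draft, collect manual feedback,
    # fold the feedback back into the prompt, and retry.
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = collect_human_feedback(draft)
        if feedback is None:
            break  # the reviewer is satisfied
        draft = generate(
            f"{task}\nReviewer feedback: {feedback}\nPlease revise."
        )
    return draft


print(refine("Summarize the article about the city council meeting."))
```

The `max_rounds` cap reflects the cost constraint: because each round requires a human pass over the output, the number of refinement iterations is usually kept small.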
Ch.3 Prompting - Foundations of Large Language Models
Practice questions

- A startup is developing a language model for a highly specialized domain, such as interpreting complex legal documents. For its initial self-refinement cycle, the team prioritizes obtaining the most accurate and nuanced feedback possible, even if the process is slow and resource-intensive. Which approach to generating feedback best aligns with these priorities?
- For a system that improves its own outputs, feedback can be generated in different ways. Match each method of generating feedback with its primary characteristic.
- A language model is tasked with summarizing a news article about a city council meeting that approved a new public park project after a lengthy debate over budget and location. The model produces the following summary: "The city council had a meeting." Which piece of human-provided feedback would be most effective for improving the model's future summarization capabilities?