Learn Before
Comparison of Optimizing Prompt Instructions vs. Demonstrations
A key difference exists in the difficulty of optimizing prompt instructions versus demonstrations. Generating high-quality demonstrations with LLMs is relatively straightforward, shifting the optimization challenge to sampling the best examples from a candidate pool. Conversely, learning instructions is harder because LLMs struggle to predict instruction quality, and the required evaluation process is computationally expensive.
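The demonstration side of this trade-off can be sketched as a subset-selection search: score each candidate subset of demonstrations on a small development set and keep the best. This is a minimal illustration, not a real optimizer; `evaluate_prompt` is a hypothetical stand-in for actually running the LLM with those demonstrations and measuring task accuracy.

```python
import itertools

def evaluate_prompt(demos, dev_set):
    # Hypothetical scorer: a real system would prompt the LLM with
    # `demos` prepended and measure accuracy on `dev_set`. Here we
    # approximate by checking how many dev labels are covered by the
    # demonstrations' labels.
    demo_labels = {label for _, label in demos}
    return sum(label in demo_labels for _, label in dev_set) / len(dev_set)

def select_demonstrations(candidates, dev_set, k=2):
    # The "sampling" challenge: exhaustively score every size-k subset
    # of the candidate pool and keep the highest-scoring one.
    best_score, best_subset = -1.0, None
    for subset in itertools.combinations(candidates, k):
        score = evaluate_prompt(subset, dev_set)
        if score > best_score:
            best_score, best_subset = score, subset
    return best_subset, best_score

# Toy candidate pool of (document, topic-label) demonstrations.
candidates = [("article about GPUs", "tech"), ("match report", "sports"),
              ("election recap", "politics"), ("chip review", "tech")]
dev_set = [("new CPU benchmark", "tech"), ("league standings", "sports")]

best, score = select_demonstrations(candidates, dev_set, k=2)
```

Even this toy search makes the cost asymmetry visible: each candidate subset requires a full evaluation pass, which is exactly the expense that makes instruction optimization, where candidates cannot be cheaply pre-scored, the harder problem.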
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Challenges in Optimizing Prompt Instructions
Comparison of Optimizing Prompt Instructions vs. Demonstrations
A team is developing an automated system to improve a Large Language Model's performance on a text classification task. The goal is to make the model more accurate at identifying the primary topic of a given document. Which of the following strategies best represents the specific process of optimizing the prompt's instructions?
Evaluating an Optimized Prompt Instruction for Summarization
Analyzing the Effectiveness of Prompt Instructions
Learn After
When designing prompts for a large language model, why is the process of optimizing the instructions generally considered a more significant computational challenge than optimizing the demonstrations (examples)?
Prompt Optimization Strategy Selection
When automatically optimizing a prompt, the main challenge associated with the demonstrations (examples) is sampling the best ones from a large pool of generated candidates, whereas the main challenge for the instructions is producing high-quality candidates in the first place, since LLMs struggle to predict instruction quality and evaluating each candidate is computationally expensive.