Learn Before
A development team is fine-tuning a language model for a specialized task. They observe two distinct outcomes from their experiments:
- Using only discrete, human-written instructions results in outputs that correctly follow a required format but lack contextual subtlety.
- Using only learnable, continuous vectors as guidance produces more subtle and context-aware outputs, but these outputs frequently deviate from the required format.
Based on these observations, which of the following strategies would be most effective for creating a model that produces outputs that are both structurally correct and contextually subtle?
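The trade-off described above is the motivation for hard–soft hybrid prompting: keep the discrete instruction for structural control and prepend learnable continuous vectors for contextual adaptation. Below is a minimal, library-agnostic sketch of the idea using NumPy; the vocabulary, embedding table, and function names are all hypothetical, and the training loop that would tune the soft vectors is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a tiny vocabulary and a fixed embedding table.
VOCAB = {"summarize": 0, "the": 1, "document": 2, ":": 3}
EMBED_DIM = 8
embedding_table = rng.normal(size=(len(VOCAB), EMBED_DIM))

def embed_hard_prompt(tokens):
    """Look up fixed embeddings for discrete, human-written tokens."""
    ids = [VOCAB[t] for t in tokens]
    return embedding_table[ids]  # shape: (len(tokens), EMBED_DIM)

# Soft prompt: continuous vectors that would be tuned by gradient
# descent during fine-tuning (randomly initialized here).
NUM_SOFT_TOKENS = 4
soft_prompt = rng.normal(size=(NUM_SOFT_TOKENS, EMBED_DIM))

def build_hybrid_prompt(tokens):
    """Prepend the tunable soft vectors to the embedded hard instruction,
    so the model sees both the learned context and the fixed format cue."""
    hard = embed_hard_prompt(tokens)
    return np.concatenate([soft_prompt, hard], axis=0)

hybrid = build_hybrid_prompt(["summarize", "the", "document", ":"])
print(hybrid.shape)  # (NUM_SOFT_TOKENS + 4, EMBED_DIM)
```

In this sketch only `soft_prompt` receives gradient updates during fine-tuning, while the hard tokens continue to anchor the required output format.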
Tags
Data Science
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Visual Representation of a Hard-Soft Prompt Hybrid
Prompting Strategy for Legal Document Summarization
Rationale for Hybrid Prompting