Learn Before
Visual Representation of a Hard-Soft Prompt Hybrid
A hybrid prompt structure can be visualized as a concatenated sequence of embeddings fed into a Large Language Model. This sequence is composed of three main parts: a 'Soft Prompt' section, a series of learnable continuous vectors; a 'Hard Prompt' section, embeddings derived from discrete, human-readable instruction tokens; and a 'User Input and Response' section, embeddings of the actual input tokens.
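The concatenation described above can be sketched in code. This is a minimal illustration, not a real model: the embedding dimension, vocabulary size, and token ids below are all hypothetical, and NumPy arrays stand in for a frozen embedding layer and the trainable soft-prompt parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 16
vocab_size = 100

# Stand-in for the model's frozen token-embedding table.
embedding_table = rng.normal(size=(vocab_size, embed_dim))

# Segment 1 (soft prompt): learnable vectors optimized directly during
# tuning; they do not correspond to any vocabulary token.
num_soft_tokens = 4
soft_prompt = rng.normal(size=(num_soft_tokens, embed_dim))

# Segment 2 (hard prompt): embeddings looked up for a fixed,
# human-readable instruction (hypothetical token ids).
hard_prompt_ids = np.array([5, 17, 42])
hard_prompt = embedding_table[hard_prompt_ids]

# Segment 3 (user input): embeddings of the end-user's tokens
# (hypothetical token ids).
user_input_ids = np.array([7, 8, 9, 10])
user_input = embedding_table[user_input_ids]

# Final input: [soft | hard | input], concatenated along the sequence axis.
input_sequence = np.concatenate([soft_prompt, hard_prompt, user_input], axis=0)
print(input_sequence.shape)  # (11, 16): 4 soft + 3 hard + 4 input vectors
```

During prompt tuning, only `soft_prompt` would receive gradient updates; the embedding table and the model's weights stay frozen.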

Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Visual Representation of a Hard-Soft Prompt Hybrid
A development team is fine-tuning a language model for a specialized task. They observe two distinct outcomes from their experiments:
- Using only discrete, human-written instructions results in outputs that correctly follow a required format but lack contextual subtlety.
- Using only learnable, continuous vectors as guidance produces more subtle and context-aware outputs, but these outputs frequently deviate from the required format.
Based on these observations, which of the following strategies would be most effective for creating a model that produces outputs that are both structurally correct and contextually subtle?
Prompting Strategy for Legal Document Summarization
Rationale for Hybrid Prompting
Learn After
A diagram illustrates the complete input embedding sequence for a language model, which is constructed from three consecutive segments. Segment 1 is a series of vectors that are directly optimized during model fine-tuning and do not correspond to any specific words. Segment 2 is composed of vectors derived from a fixed, human-readable instruction. Segment 3 contains vectors corresponding to the text provided by the end-user. What is the fundamental difference in the nature and function of Segment 1 compared to Segment 2?
A large language model is being configured to use a hybrid prompt. Arrange the following components in the typical order they would be concatenated to form the final input embedding sequence fed into the model.
A diagram of an input to a language model shows a sequence of vectors concatenated from three distinct sources. Match each source component with its correct description based on its nature and function within the sequence.