Learn Before
Visual Representation of Hard vs. Soft Prompts
A hard prompt, such as the instruction 'Translate the sentence into Chinese', is a discrete sequence of human-readable tokens. When this sequence is fed into a Large Language Model, each token is mapped to a real-valued vector, producing a sequence of continuous embeddings. These intermediate hidden states in the model's embedding space can be conceptually viewed as a soft prompt, illustrating the fundamental difference between human-readable text and the continuous representations the model uses internally.
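The contrast above can be sketched in code. This is a minimal illustration, not an actual model: the toy vocabulary, embedding size, and random values are all assumptions chosen for demonstration. The key point is that a hard prompt reaches the embedding space only through discrete token IDs, while a soft prompt is defined directly as continuous vectors that need not correspond to any word.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding table (illustrative values only).
vocab = {"Translate": 0, "the": 1, "sentence": 2, "into": 3, "Chinese": 4}
embed_dim = 8
embedding_table = rng.normal(size=(len(vocab), embed_dim))

# Hard prompt: human-readable text, tokenized into discrete IDs,
# then looked up in the embedding table.
hard_prompt = "Translate the sentence into Chinese"
token_ids = [vocab[word] for word in hard_prompt.split()]
hard_prompt_vectors = embedding_table[token_ids]   # shape: (5, embed_dim)

# Soft prompt: trainable continuous vectors inserted directly into the
# embedding space; they are not rows of the embedding table and do not
# map back to any actual words.
num_virtual_tokens = 5
soft_prompt_vectors = rng.normal(size=(num_virtual_tokens, embed_dim))

# Both arrive at the model as sequences of real-valued vectors,
# but only the hard prompt is readable by a human.
print(hard_prompt_vectors.shape, soft_prompt_vectors.shape)
```

In prompt tuning, the soft-prompt vectors would be updated by gradient descent while the model's own weights stay frozen, which is why they drift away from anything in the discrete vocabulary.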
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Foundations of Large Language Models Course
Related
Visual Representation of Hard vs. Soft Prompts
Lack of Interpretability in Soft Prompts
Inflexibility of Soft Prompts
Selecting a Prompting Strategy for a New AI Application
Match each characteristic to the type of prompt it best describes.
A research team is developing a language model for a highly specialized and stable task where maximizing performance is the absolute priority. The team has access to a large dataset and significant computational resources for training, but they are less concerned with the human-readability of the model's internal guidance mechanisms. Given these conditions, which prompting approach would be more suitable, and why?
Learn After
An AI researcher is examining two different methods for guiding a language model. Method 1 involves prepending the text 'Summarize the following article:' to the input. Method 2 involves inserting a sequence of trainable, continuous numerical vectors directly into the model's embedding space, which do not correspond to any actual words. Which statement correctly analyzes the representation of these two methods?
Analysis of Prompt Representations
A language model's input can be guided in different ways. Analyze the following descriptions of how these guides are represented and match each one to the correct term.