A diagram illustrates the complete input embedding sequence for a language model, which is constructed from three consecutive segments. Segment 1 is a series of vectors that are directly optimized during model fine-tuning and do not correspond to any specific words. Segment 2 is composed of vectors derived from a fixed, human-readable instruction. Segment 3 contains vectors corresponding to the text provided by the end-user. What is the fundamental difference in the nature and function of Segment 1 compared to Segment 2?
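The construction described above can be sketched in a toy NumPy example. This is a minimal illustration under stated assumptions, not a real model: the dimension, vocabulary, and embedding table are all hypothetical. The key contrast is that Segment 1 (the soft prompt) consists of free parameters optimized directly, while Segments 2 and 3 are looked up from a fixed token-to-embedding table.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical embedding dimension

# Segment 1: "soft prompt" -- freely learned vectors with no vocabulary entry.
# In real fine-tuning these would be updated by gradient descent.
soft_prompt = rng.normal(size=(4, d))

# Toy embedding table standing in for the model's vocabulary.
vocab = {"summarize": 0, "the": 1, "text": 2, "hello": 3, "world": 4}
embedding_table = rng.normal(size=(len(vocab), d))

def embed(tokens):
    # Segments 2 and 3 come from lookup: token -> fixed embedding row.
    return embedding_table[[vocab[t] for t in tokens]]

instruction = embed(["summarize", "the", "text"])  # Segment 2: fixed instruction
user_input = embed(["hello", "world"])             # Segment 3: end-user text

# Final input: concatenation along the sequence axis,
# in the order soft prompt -> instruction -> user input.
model_input = np.concatenate([soft_prompt, instruction, user_input], axis=0)
print(model_input.shape)
```

The sketch makes the answer concrete: Segment 1 is a set of continuous parameters with no textual interpretation, whereas Segment 2 is fully determined by discrete tokens of a human-readable instruction.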
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A large language model is being configured to use a hybrid prompt. Arrange the following components in the typical order they would be concatenated to form the final input embedding sequence fed into the model.
A diagram of an input to a language model shows a sequence of vectors concatenated from three distinct sources. Match each source component with its correct description based on its nature and function within the sequence.