Learn Before
In the prefix tuning architecture, a sequence of trainable prefix vectors is prepended to the hidden states at every layer of the model, typically by concatenating learned key and value vectors into each attention computation. This per-layer injection of trainable vectors is what distinguishes prefix tuning from prompt tuning, which prepends trainable vectors only to the initial input embeddings and lets all deeper layers compute their hidden states from that modified input alone.
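To make the per-layer injection concrete, here is a minimal PyTorch sketch. Names such as PrefixAttention and prefix_len are illustrative, not from any library, and the single-head attention is a toy simplification: each layer owns its own trainable prefix keys and values, which are concatenated ahead of the keys and values computed from the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixAttention(nn.Module):
    """Toy single-head self-attention with per-layer trainable prefix keys/values."""
    def __init__(self, d_model: int, prefix_len: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # The defining feature of prefix tuning: every layer carries its own
        # trainable prefix vectors, not just the input embedding layer.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch = x.size(0)
        q = self.q(x)  # (batch, seq, d_model)
        # Prepend this layer's trainable prefix to its keys and values.
        k = torch.cat([self.prefix_k.expand(batch, -1, -1), self.k(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(batch, -1, -1), self.v(x)], dim=1)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        return F.softmax(scores, dim=-1) @ v  # (batch, seq, d_model)

# Every layer in the stack has its own prefix parameters.
layers = nn.ModuleList([PrefixAttention(d_model=64, prefix_len=8) for _ in range(4)])
h = torch.randn(2, 10, 64)  # (batch, seq, d_model)
for layer in layers:
    h = layer(h)
print(h.shape)  # torch.Size([2, 10, 64])
```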
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A researcher is implementing a parameter-efficient fine-tuning method for a large language model. The goal is to adapt the model to a new task by introducing a small number of new, trainable parameters while keeping the vast majority of the original model's weights frozen. Which of the following implementation strategies correctly identifies the unique architectural modification central to this specific method?
In the prefix tuning architecture, a sequence of trainable prefix vectors is prepended to the hidden states at every layer of the model, typically by concatenating learned key and value vectors into each attention computation. This per-layer injection of trainable vectors is what distinguishes prefix tuning from prompt tuning, which prepends trainable vectors only to the initial input embeddings and lets all deeper layers compute their hidden states from that modified input alone.
Analyzing an Implementation of a Fine-Tuning Method
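As a rough illustration of the scenario in the question above, the following sketch freezes every weight of a toy pretrained backbone and registers only per-layer prefix vectors as trainable parameters. The backbone, prefix shapes, and layer count are assumptions for the example, not part of the course material.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained backbone (illustrative only).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)

# The only new parameters: one trainable prefix per layer (shapes illustrative).
prefixes = nn.ParameterList(
    [nn.Parameter(torch.randn(8, 64) * 0.02) for _ in range(2)]
)

# Freeze the original weights; only the prefixes receive gradient updates.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(prefixes.parameters(), lr=1e-3)

trainable = sum(p.numel() for p in prefixes.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")
```

The printed ratio makes the parameter-efficiency claim tangible: the prefixes amount to a small fraction of the total parameter count, while the backbone stays frozen.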