Prefix Tuning (Deep Prompt Tuning)

Prefix Tuning, also known as Deep Prompt Tuning, is a parameter-efficient fine-tuning method where a sequence of trainable vectors, called prefixes, is prepended to the hidden states at each layer of a Large Language Model. As illustrated in the diagram, for each layer $l$, the prefix vectors ($p_0^l, p_1^l, \dots$) are concatenated with the hidden states of the user input ($h_0^l, h_1^l, \dots$). These prefixes are the only parameters updated during training; they are optimized directly through backpropagation based on a task-specific loss function, steering the model's behavior without modifying the original LLM weights.

[Diagram: per-layer trainable prefixes $p_0^l, p_1^l, \dots$ prepended to the input hidden states $h_0^l, h_1^l, \dots$ before each transformer layer]
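
To make the mechanism concrete, here is a minimal sketch in PyTorch. It assumes a toy encoder built from nn.TransformerEncoderLayer; the class name PrefixTunedEncoder and hyperparameters such as prefix_len and d_model are illustrative choices, not part of any published implementation. Each layer gets its own trainable prefix, the base weights are frozen, and backpropagation therefore updates only the prefix vectors:

```python
import torch
import torch.nn as nn

class PrefixTunedEncoder(nn.Module):
    """Toy transformer encoder with trainable per-layer prefixes.

    Only the prefixes receive gradient updates; the base layers are frozen.
    """

    def __init__(self, num_layers=2, d_model=64, nhead=4, prefix_len=8):
        super().__init__()
        self.prefix_len = prefix_len
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        # One trainable prefix matrix per layer: rows are p_0^l, p_1^l, ...
        self.prefixes = nn.ParameterList(
            nn.Parameter(0.02 * torch.randn(prefix_len, d_model))
            for _ in range(num_layers)
        )
        # Freeze the base model; gradients still flow back to the prefixes.
        for param in self.layers.parameters():
            param.requires_grad = False

    def forward(self, h):
        # h: (batch, seq_len, d_model) hidden states of the user input
        for layer, prefix in zip(self.layers, self.prefixes):
            batch = h.size(0)
            # Prepend this layer's prefix to the hidden states h_0^l, h_1^l, ...
            h = torch.cat([prefix.expand(batch, -1, -1), h], dim=1)
            h = layer(h)
            # Discard the prefix positions so the next layer inserts its own.
            h = h[:, self.prefix_len:, :]
        return h

model = PrefixTunedEncoder()
x = torch.randn(2, 10, 64)                        # (batch, seq_len, d_model)
out = model(x)                                    # shape (2, 10, 64)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)  # updates prefixes only
```

Discarding the prefix positions after each layer means every layer contributes its own vectors $p_0^l, p_1^l, \dots$, matching the per-layer prefixes in the diagram. A full implementation would also handle attention masks, and the original Prefix Tuning paper reparameterizes the prefixes through a small MLP during training for stability.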
