
Prefix Tuning

Prefix tuning is a parameter-efficient fine-tuning (PEFT) technique for large language models. Instead of updating all of the model's parameters, it keeps the original model frozen and introduces a small number of trainable vectors, called a prefix, which are prepended to the sequence of hidden states at each transformer layer. Only these prefix parameters are optimized during training, so the model is adapted to a new task by learning a small, task-specific prefix that steers the behavior of the much larger frozen model.
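
A minimal PyTorch sketch of this idea is shown below: it wraps a single frozen transformer layer, prepends a block of trainable prefix vectors to the layer's input hidden states, and exposes only those prefix vectors to the optimizer. The class name `PrefixTunedLayer` and all dimensions are illustrative assumptions, not part of the concept above; full implementations of the original prefix-tuning method insert the prefix as key/value vectors inside each layer's attention and usually reparameterize it with a small MLP for training stability.

```python
import torch
import torch.nn as nn

class PrefixTunedLayer(nn.Module):
    """Illustrative sketch: prepend trainable prefix vectors to a frozen layer's input."""

    def __init__(self, frozen_layer: nn.Module, prefix_len: int, hidden_dim: int):
        super().__init__()
        self.frozen_layer = frozen_layer
        for param in self.frozen_layer.parameters():
            param.requires_grad = False               # the base layer stays frozen
        # The only trainable parameters: prefix_len vectors of width hidden_dim
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        batch_size = hidden_states.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch_size, -1, -1)
        # Prepend the prefix to the layer's input sequence of hidden states
        extended = torch.cat([prefix, hidden_states], dim=1)
        out = self.frozen_layer(extended)
        # Drop the prefix positions so the output length matches the original input
        return out[:, self.prefix.size(0):, :]

# Toy usage (hypothetical sizes): wrap one frozen layer, optimize only the prefix.
hidden_dim, prefix_len = 64, 8
base_layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True)
layer = PrefixTunedLayer(base_layer, prefix_len, hidden_dim)

x = torch.randn(2, 16, hidden_dim)                    # (batch, seq_len, hidden_dim)
y = layer(x)                                          # -> shape (2, 16, 64)

trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)     # updates only the prefix
```

Because only prefix_len × hidden_dim parameters per wrapped layer are trainable in this sketch, the task-specific footprint is a tiny fraction of the frozen model's size.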

