
Illustration of Prefix Fine-Tuning

Prefix fine-tuning can be illustrated with a task such as translation. During training, sequences of trainable prefix vectors, such as $\mathbf{p}_0^l$ and $\mathbf{p}_1^l$, are prepended to the input of each Transformer layer $l$ and are updated with the error gradients propagated back from the output. By adjusting these vectors, the model adapts to the specific task: the prefixes act as prompts that activate the Large Language Model (LLM) without requiring explicit text instructions (e.g., "Translate the following sentence"). At test time, the optimized prefix vectors are prepended to the layers so the model can perform the task.

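To make this concrete, below is a minimal PyTorch sketch of the idea: a small frozen Transformer encoder with trainable prefix vectors prepended to each layer's input, where only the prefixes receive gradient updates. The names (`PrefixTunedEncoder`, `prefix_len`, the toy loss) and the use of `nn.TransformerEncoderLayer` are illustrative assumptions, not the exact setup described above.

```python
# A minimal sketch of prefix fine-tuning, assuming a toy Transformer encoder.
# All class/variable names here are illustrative placeholders.
import torch
import torch.nn as nn

class PrefixTunedEncoder(nn.Module):
    def __init__(self, num_layers=2, d_model=64, nhead=4, prefix_len=2):
        super().__init__()
        # Frozen backbone: its weights receive no gradient updates.
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        ])
        for p in self.layers.parameters():
            p.requires_grad = False
        # Trainable prefix vectors p_0^l, p_1^l, ... one set per layer.
        self.prefixes = nn.Parameter(
            torch.randn(num_layers, prefix_len, d_model) * 0.02
        )

    def forward(self, x):                        # x: (batch, seq_len, d_model)
        batch = x.size(0)
        for l, layer in enumerate(self.layers):
            prefix = self.prefixes[l].expand(batch, -1, -1)
            h = torch.cat([prefix, x], dim=1)    # prepend prefixes at layer l
            h = layer(h)
            x = h[:, prefix.size(1):]            # keep only the token positions
        return x

# Only the prefix parameters are optimized; gradients from the task loss
# flow back through the frozen layers into the prefixes.
model = PrefixTunedEncoder()
opt = torch.optim.Adam([model.prefixes], lr=1e-3)
x = torch.randn(8, 10, 64)                       # dummy batch of hidden states
loss = model(x).pow(2).mean()                    # placeholder task loss
loss.backward()
opt.step()
```

In this sketch the prefixes are prepended to the hidden sequence at every layer, mirroring the illustration above; a real implementation would attach them to the model's attention keys and values and train against the actual task loss (e.g., translation cross-entropy).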