Efficiency of Prefix Fine-Tuning

Prefix fine-tuning is highly computationally efficient because it trains only a small set of added parameters rather than the model itself. Specifically, it introduces L × n × d additional parameters, where L is the number of Transformer layers, n is the number of prefix vectors per layer, and d is the dimensionality of each prefix vector. Since this added parameter count is orders of magnitude smaller than the total number of parameters in the Large Language Model (whose original parameters remain frozen), fine-tuning incurs far less training overhead than updating the full model.
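
As a rough illustration, the parameter count can be checked directly. The sketch below uses hypothetical sizes chosen for illustration (roughly GPT-2 Large scale; the layer count, prefix length, and total parameter figure are assumptions, not values from the text). It allocates a trainable prefix tensor of shape L × n × d in PyTorch and compares its size against the frozen model's total parameter count:

import torch

# Hypothetical sizes, chosen for illustration (roughly GPT-2 Large scale).
L, n, d = 36, 20, 1280

# Trainable prefix: one n x d block of prefix vectors per Transformer layer.
prefix = torch.nn.Parameter(torch.randn(L, n, d))

added_params = prefix.numel()       # L * n * d = 921,600
total_params = 774_000_000          # approximate frozen-model parameter count

print(f"Trainable prefix parameters: {added_params:,}")
print(f"Share of full model: {added_params / total_params:.4%}")  # ~0.12%

With these assumed sizes, the prefix adds under a million trainable parameters, about 0.12% of the full model, which is why optimizer state, gradient storage, and per-step update cost all shrink accordingly.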
