
Comparison of Prompt Tuning and Prefix Fine-Tuning

Both prompt tuning and prefix fine-tuning adapt Large Language Models (LLMs) to specific tasks by training a small set of continuous vectors while the model's own weights stay frozen. Prefix fine-tuning, however, injects trainable prefix vectors into every layer of the Transformer, which requires modifying the model's internals. In contrast, prompt tuning keeps the soft prompt separate from the LLM: trainable embeddings are simply prepended to the input sequence at the embedding layer, and nothing deeper is touched. Because the original architecture is preserved, prompt tuning is easier to deploy across tasks: one frozen model can serve many tasks, each with its own small soft prompt, without adjusting the core model.
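To make the contrast concrete, below is a minimal PyTorch sketch of the prompt-tuning side: trainable soft-prompt embeddings are prepended to the input embeddings while the base model stays frozen. It assumes a Hugging Face-style model that exposes get_input_embeddings() and accepts inputs_embeds; the wrapper class name and the default of 20 virtual tokens are illustrative choices, not part of the original text. Prefix fine-tuning would instead have to insert such vectors into each Transformer layer's attention computation, which this wrapper deliberately avoids.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Illustrative prompt-tuning sketch (not an official API).

    Prepends trainable soft-prompt vectors to the frozen base model's
    input embeddings. Assumes a Hugging Face-style `base_model` with
    `get_input_embeddings()` and an `inputs_embeds` argument.
    """

    def __init__(self, base_model, num_virtual_tokens=20):
        super().__init__()
        self.base_model = base_model
        # Freeze the entire LLM: only the soft prompt is trained.
        for p in self.base_model.parameters():
            p.requires_grad = False

        dim = base_model.get_input_embeddings().embedding_dim
        # The soft prompt: num_virtual_tokens trainable vectors.
        self.soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, dim) * 0.02)

    def forward(self, input_ids, attention_mask=None, **kwargs):
        # Look up the regular token embeddings (frozen).
        token_embeds = self.base_model.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        # Prepend one copy of the soft prompt per batch element.
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
        # Extend the attention mask to cover the virtual tokens.
        if attention_mask is not None:
            prefix_mask = attention_mask.new_ones(batch, self.soft_prompt.size(0))
            attention_mask = torch.cat([prefix_mask, attention_mask], dim=1)
        return self.base_model(inputs_embeds=inputs_embeds,
                               attention_mask=attention_mask, **kwargs)
```

Because only `self.soft_prompt` requires gradients, a new task needs just these few kilobytes of parameters per task, while the shared frozen model stays unchanged.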
