Architectural Preservation by Separating Soft Prompts from LLMs

In contrast to methods such as prefix tuning, which attach trainable prefix vectors to the internal layers of a Large Language Model, an alternative approach keeps the soft prompts separate from the LLM itself. Because the core model stays frozen and architecturally unchanged, a single copy of it can be shared across tasks, with each task carrying only its own small set of prompt parameters; this makes deployment across different tasks significantly more efficient. Prompt tuning is the canonical example of this method: it confines all trainable parameters to the input embedding layer, leaving the rest of the model intact.
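To make the separation concrete, below is a minimal PyTorch sketch of prompt tuning over a frozen Hugging Face-style causal LM. The class name PromptTunedLM, the prompt length of 20 virtual tokens, and the initialization scale are illustrative assumptions, not details from the source; the point is that the only trainable parameters live outside the base model, prepended at the input layer.

```python
# A minimal prompt-tuning sketch: the base LLM is frozen and unmodified;
# only a small matrix of soft-prompt embeddings is trained.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM


class PromptTunedLM(nn.Module):
    def __init__(self, model_name: str, num_virtual_tokens: int = 20):
        super().__init__()
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        # Freeze every parameter of the base LLM; its architecture is untouched.
        for p in self.model.parameters():
            p.requires_grad = False
        embed = self.model.get_input_embeddings()
        # The only trainable parameters: one soft-prompt vector per virtual token.
        self.soft_prompt = nn.Parameter(
            torch.randn(num_virtual_tokens, embed.embedding_dim) * 0.02
        )

    def forward(self, input_ids, attention_mask):
        batch = input_ids.size(0)
        tok_embeds = self.model.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the soft prompts at the input layer only; no internal
        # layer of the model is modified.
        inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)
        prompt_mask = torch.ones(
            batch, prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.model(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
```

Because only soft_prompt has requires_grad=True, an optimizer built over just that parameter (e.g. torch.optim.AdamW([m.soft_prompt])) trains a few thousand values while the frozen model is left unchanged, so one deployed copy of the LLM can serve many tasks, each supplying its own prompt matrix.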
