Visual Representation of Input Composition in Prompt Tuning

In prompt tuning, the input fed to the large language model is a single sequence formed by concatenating trainable prompt embeddings with the standard token embeddings of the input text. This composition can be written as the sequence $\{\mathbf{p}_0, \mathbf{p}_1, \dots, \mathbf{p}_n, \mathbf{e}_0, \mathbf{e}_1, \dots, \mathbf{e}_m\}$, where the $\mathbf{p}_i$ vectors constitute the trainable soft prompt and the $\mathbf{e}_j$ vectors are the embeddings of the input tokens. During tuning, only the prompt embeddings are updated for the specific task; the token embeddings, along with the rest of the model's weights, remain frozen.
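The composition above can be sketched in a few lines of NumPy. This is a minimal illustration, not a training loop: the dimensions, the random initialization, and the names (`soft_prompt`, `compose_input`) are illustrative assumptions, and in practice the soft prompt would be a parameter updated by gradient descent while the embedding table stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16     # embedding dimension (illustrative)
n_prompt = 4     # number of trainable soft-prompt vectors p_i
vocab_size = 100 # illustrative vocabulary size

# Trainable soft prompt: the p_i vectors, updated during prompt tuning.
soft_prompt = rng.normal(size=(n_prompt, d_model))

# Frozen embedding table of the pretrained model (source of the e_j vectors).
embedding_table = rng.normal(size=(vocab_size, d_model))

def compose_input(token_ids):
    """Prepend the soft prompt to the frozen token embeddings."""
    token_embeddings = embedding_table[token_ids]           # shape (m, d_model)
    return np.concatenate([soft_prompt, token_embeddings])  # shape (n + m, d_model)

tokens = [7, 42, 3]          # token ids of the input text
x = compose_input(tokens)
print(x.shape)               # (7, 16): 4 prompt vectors + 3 token embeddings
```

The model then processes `x` exactly as it would any embedded sequence; only the first `n_prompt` rows carry task-specific, learnable parameters.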


Updated 2025-10-08


Tags

Ch.3 Prompting - Foundations of Large Language Models