Formula

Input Composition Formula for Prompt Tuning

The actual input sequence provided to a Large Language Model during prompt tuning is formed by prepending trainable pseudo embeddings to the standard token embeddings. This composition is formally expressed as:

$$\underbrace{\mathbf{p}_0\ \mathbf{p}_1\ \dots\ \mathbf{p}_n}_{\text{trainable}}\ \underbrace{\mathbf{e}_0\ \mathbf{e}_1\ \dots\ \mathbf{e}_m}_{\text{token embeddings}}$$

Here, $\mathbf{p}_0, \dots, \mathbf{p}_n$ are the trainable soft-prompt embeddings optimized for a specific task, and $\mathbf{e}_0, \dots, \mathbf{e}_m$ are the fixed token embeddings representing the user's input.
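The composition above can be sketched as a simple concatenation of two embedding matrices. This is a minimal illustration, assuming an embedding dimension and sequence lengths chosen for the example (not taken from the text); in practice the soft prompt would be a learnable parameter tensor and the token embeddings would come from the frozen model's embedding table.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8        # embedding dimension (assumed for illustration)
n_soft = 4   # number of trainable soft-prompt vectors p_0 ... p_n
m_tok = 6    # number of input tokens e_0 ... e_m

# Trainable soft prompt: the only parameters updated during prompt tuning.
soft_prompt = rng.normal(size=(n_soft, d))

# Frozen token embeddings for the user's input (in a real model, looked up
# from the embedding table, which stays fixed during prompt tuning).
token_embeddings = rng.normal(size=(m_tok, d))

# Input to the frozen LLM: soft prompt prepended to the token embeddings.
model_input = np.concatenate([soft_prompt, token_embeddings], axis=0)

print(model_input.shape)  # (10, 8): n_soft + m_tok positions, each of dim d
```

During training, gradients flow back only into `soft_prompt`, while `token_embeddings` and the rest of the model remain frozen.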

Updated 2026-04-30

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences