
Mechanism of Prompt Tuning at the Embedding Layer

In Large Language Models, symbolic input tokens are first converted into numerical representations called token embeddings. Prompt tuning modifies this initial input by prepending a sequence of trainable vectors, known as pseudo-embeddings ($p_0, \dots, p_n$), to the standard token embeddings of the user's text. These pseudo-embeddings are not associated with any specific words in the vocabulary; they are learned directly to guide the model's behavior for a specific task. The resulting concatenated sequence of pseudo-embeddings followed by token embeddings serves as the complete input to the LLM.
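The mechanism above can be sketched in PyTorch. This is a minimal illustration, not a production implementation: the module name `PromptTuningEmbedding` and the number of pseudo-embeddings are assumptions for the example. The key idea is that the pseudo-embeddings are an `nn.Parameter` (trainable) while the base embedding table can stay frozen, and the two are concatenated along the sequence dimension.

```python
import torch
import torch.nn as nn

class PromptTuningEmbedding(nn.Module):
    """Prepends trainable pseudo-embeddings p_0..p_n to the token embeddings."""

    def __init__(self, token_embedding: nn.Embedding, num_prompt_tokens: int):
        super().__init__()
        # Base embedding table of the LLM (typically kept frozen in prompt tuning).
        self.token_embedding = token_embedding
        embed_dim = token_embedding.embedding_dim
        # p_0 ... p_n: learned vectors not tied to any vocabulary word.
        self.pseudo_embeddings = nn.Parameter(
            torch.randn(num_prompt_tokens, embed_dim) * 0.02
        )

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # input_ids: (batch, seq_len) integer token IDs of the user's text.
        tok = self.token_embedding(input_ids)            # (batch, seq_len, dim)
        batch_size = input_ids.size(0)
        prompt = self.pseudo_embeddings.unsqueeze(0).expand(batch_size, -1, -1)
        # Concatenated sequence [p_0..p_n ; token embeddings] is the LLM input.
        return torch.cat([prompt, tok], dim=1)           # (batch, n+seq_len, dim)
```

During training, only `pseudo_embeddings` would receive gradient updates; the rest of the model's parameters are left untouched, which is what makes prompt tuning parameter-efficient.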


Updated 2026-05-02


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
