Optimization Goal for Soft Prompt Learning via Context Compression

When learning a soft prompt $\sigma$ through context compression, the objective is to make the model's predictions for a given input $z$ nearly identical whether the model is conditioned on the compact soft prompt $\sigma$ or on the original, longer context $c$. This is achieved by minimizing the difference between the output distributions produced under the two conditions, i.e., by seeking the $\sigma$ that minimizes the dissimilarity between the two predictions.
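One way to write this formalization (a sketch consistent with the prose above; the exact dissimilarity measure $D$, e.g. a KL divergence, varies across methods):

$$\hat{\sigma} \;=\; \arg\min_{\sigma}\; D\big(\Pr(\cdot \mid c, z) \,\|\, \Pr(\cdot \mid \sigma, z)\big)$$

where $\Pr(\cdot \mid c, z)$ and $\Pr(\cdot \mid \sigma, z)$ are the model's output distributions given the full context and the soft prompt, respectively.

Below is a minimal, runnable PyTorch sketch of this objective, assuming KL divergence as $D$. Everything in it is a stand-in: `frozen_lm` is a toy linear head playing the role of the frozen language model, and the dimensions are arbitrary, so the script runs end-to-end without pretrained weights.

```python
import torch
import torch.nn.functional as F

# Toy dimensions; all names and sizes here are illustrative stand-ins,
# not part of any real context-compression method.
torch.manual_seed(0)
d_model, vocab, ctx_len, prompt_len, inp_len = 16, 50, 8, 2, 4

# A frozen linear head plays the role of the frozen language model.
frozen_lm = torch.nn.Linear(d_model, vocab)
for p in frozen_lm.parameters():
    p.requires_grad_(False)

c = torch.randn(ctx_len, d_model)       # embeddings of the long context c
z = torch.randn(inp_len, d_model)       # embeddings of the input z
sigma = torch.randn(prompt_len, d_model, requires_grad=True)  # learnable soft prompt

def predict(prefix: torch.Tensor) -> torch.Tensor:
    """Logits for [prefix; z]; mean-pooling keeps the toy model shape-agnostic."""
    h = torch.cat([prefix, z], dim=0).mean(dim=0)
    return frozen_lm(h)

# Fixed target: the model's prediction under the full context c.
with torch.no_grad():
    teacher_logp = F.log_softmax(predict(c), dim=-1)

opt = torch.optim.Adam([sigma], lr=1e-2)
for step in range(200):
    student_logp = F.log_softmax(predict(sigma), dim=-1)  # prediction under sigma
    # D = KL(Pr(.|c,z) || Pr(.|sigma,z)); only sigma receives gradients.
    loss = F.kl_div(student_logp, teacher_logp, log_target=True, reduction="sum")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that only `sigma` is updated; the model itself stays frozen, which is the defining property of soft prompt learning.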


