Formula for Soft Prompt Optimization via Log-Likelihood Maximization

The optimal soft prompt, denoted $\hat{\sigma}$, can be found by maximizing the log-probability of the target prediction $\hat{y}$ (derived from the full context). This optimization is conditioned on the soft prompt $\sigma$ and the original input $z$. The formula is expressed as:

$$\hat{\sigma} = \underset{\sigma}{\arg\max}\, \log \Pr(\hat{y} \mid \sigma, z)$$

This approach frames the optimization problem as a maximum likelihood estimation task, where the goal is to find the prompt that makes the desired output most probable.
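The maximization above can be sketched with gradient ascent on a toy model. This is a minimal illustrative example, not the book's implementation: the frozen linear "model" `W`, the dimensions, and the learning rate are all assumptions chosen to keep the sketch self-contained. The model maps the concatenation of the soft prompt $\sigma$ and the fixed input $z$ to logits over a tiny vocabulary, and we ascend the gradient of $\log \Pr(\hat{y} \mid \sigma, z)$ with respect to $\sigma$ only.

```python
import numpy as np

# Hypothetical toy setup (all names and sizes are illustrative assumptions):
# a frozen linear "model" maps [sigma ; z] to logits over a 4-token vocabulary.
rng = np.random.default_rng(0)
d_sigma, d_z, vocab = 3, 2, 4
W = rng.normal(size=(vocab, d_sigma + d_z))  # frozen model weights

z = np.array([0.5, -1.0])   # original input z (fixed)
y_hat = 2                   # target prediction index (from the full context)

def log_prob(sigma):
    """log Pr(y_hat | sigma, z) under the toy softmax model."""
    logits = W @ np.concatenate([sigma, z])
    logits -= logits.max()                       # numerical stability
    return logits[y_hat] - np.log(np.exp(logits).sum())

def grad(sigma):
    """Analytic gradient of log Pr(y_hat | sigma, z) w.r.t. sigma."""
    logits = W @ np.concatenate([sigma, z])
    logits -= logits.max()
    p = np.exp(logits) / np.exp(logits).sum()    # softmax probabilities
    onehot = np.eye(vocab)[y_hat]
    # Chain rule: d log Pr / d sigma = W_sigma^T (onehot(y_hat) - p)
    return W[:, :d_sigma].T @ (onehot - p)

sigma = np.zeros(d_sigma)                        # initial soft prompt
before = log_prob(sigma)
for _ in range(200):                             # gradient ascent on log Pr
    sigma += 0.5 * grad(sigma)
after = log_prob(sigma)
print(before < after)  # log-likelihood of y_hat increases
```

Only $\sigma$ is updated; the model weights `W` and the input `z` stay frozen, which is the defining feature of soft prompt optimization.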

Updated 2026-05-02


Ch.4 Alignment - Foundations of Large Language Models
