Case Study

Impact of the Similarity Function in Soft Prompt Optimization

A researcher is using the formula below to learn an optimal soft prompt, σ, that compresses a longer context, c. The goal is for the model's prediction with the soft prompt (ŷ_σ) to match its prediction with the full context (ŷ).

σ̂ = argmin_σ s(ŷ, ŷ_σ)
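
When s is differentiable, σ can be found by gradient descent with the language model frozen. Below is a minimal sketch of this setup; GPT-2 via Hugging Face transformers, the KL-divergence choice of s, and all strings and hyperparameters are illustrative assumptions, not specifics from this case study.

```python
# Sketch: learn sigma so the soft-prompt prediction matches the
# full-context prediction. The LM is frozen; only sigma is trained.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Hypothetical context c and query x (any long instruction/background works).
context = ("Background: France is a country in Europe whose capital "
           "and largest city is Paris. Q: What is the capital of France?")
query = " A:"

ctx_ids = tok(context, return_tensors="pt").input_ids
qry_ids = tok(query, return_tensors="pt").input_ids

# y_hat: next-token distribution given the full context c followed by x.
with torch.no_grad():
    y_hat = model(torch.cat([ctx_ids, qry_ids], dim=1)).logits[:, -1, :]
    y_hat = y_hat.log_softmax(-1)

# sigma: K trainable embedding vectors standing in for the compressed context.
K = 8
sigma = torch.nn.Parameter(0.02 * torch.randn(1, K, model.config.n_embd))
opt = torch.optim.Adam([sigma], lr=1e-2)

qry_emb = model.get_input_embeddings()(qry_ids)  # frozen, no grad needed
for step in range(200):
    inputs = torch.cat([sigma, qry_emb], dim=1)  # [sigma ; x] replaces [c ; x]
    y_sigma = model(inputs_embeds=inputs).logits[:, -1, :].log_softmax(-1)
    # s(y_hat, y_sigma) instantiated as KL(y_hat || y_sigma):
    loss = F.kl_div(y_sigma, y_hat, log_target=True, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the frozen model is differentiable end to end, any smooth choice of s yields gradients for σ; a piecewise-constant s would give no gradient signal at all, which is exactly the tension the two candidate functions below expose.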

The researcher is considering two different definitions for the similarity function s(·, ·):

  • Function A: A simple mismatch penalty. The function returns 0 if the single most likely output token under ŷ is identical to the single most likely output token under ŷ_σ, and 1 otherwise.
  • Function B: A distributional divergence measure. The function compares the entire probability distributions over all possible output tokens produced with the full context and with the soft prompt, respectively (for example, via a KL divergence). Both candidates are sketched in code after this list.
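
As referenced above, here is a minimal sketch of the two candidates, assuming each prediction is represented as a logits vector over the vocabulary. KL divergence for Function B is an assumption; the case study only specifies "a distributional divergence measure".

```python
import torch
import torch.nn.functional as F

def s_mismatch(y_logits: torch.Tensor, y_sigma_logits: torch.Tensor) -> torch.Tensor:
    """Function A: 0 if the argmax tokens agree, 1 otherwise.

    Piecewise-constant, so it provides no gradient signal for sigma."""
    return (y_logits.argmax(-1) != y_sigma_logits.argmax(-1)).float()

def s_divergence(y_logits: torch.Tensor, y_sigma_logits: torch.Tensor) -> torch.Tensor:
    """Function B: KL(y || y_sigma) over the full output distribution.

    Smooth and differentiable; penalizes probability mass that drifts
    anywhere in the vocabulary, not just a flipped top-1 token."""
    return F.kl_div(
        y_sigma_logits.log_softmax(-1),  # approximating distribution (log-probs)
        y_logits.log_softmax(-1),        # reference distribution (log-probs)
        log_target=True,
        reduction="sum",
    )

# Two toy distributions with the same top-1 token but very different tails:
y = torch.tensor([4.0, 1.0, 0.5, 0.1])
y_sigma = torch.tensor([4.0, 0.1, 1.0, 3.9])
print(s_mismatch(y, y_sigma))    # tensor(0.) -- "perfect" under Function A
print(s_divergence(y, y_sigma))  # positive   -- a clear gap under Function B
```

The printed pair illustrates the contrast at stake: Function A is blind to everything except the top-1 token, while Function B registers differences anywhere in the two distributions.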

Analyze the likely difference in the behavior of the resulting soft prompt (σ̂) when optimized using Function A versus Function B, particularly for tasks that require generating multi-word, coherent answers.

