Learn Before
Activity (Process)

Iterative Refinement of Soft Prompts via Transformer Layers

An iterative optimization process for soft prompts involves using the main Transformer layers of a model to refine the prompt vectors over successive steps. In this mechanism, the soft prompts from a given step, denoted as σ_i, are processed along with the standard input embeddings. The resulting hidden states from the Transformer layers are then used to generate an updated, more optimized set of soft prompts, σ_{i+1}, for the next iteration. This creates a feedback loop where the model progressively improves the soft prompts.
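The loop described above can be sketched in a few lines. This is a minimal toy illustration, not the actual mechanism of any specific model: the `transformer_layers` function here is a hypothetical stand-in (a random linear map with a residual connection) for a real Transformer stack, and the dimensions are arbitrary. The key point it shows is the feedback structure: σ_i is concatenated with the input embeddings, pushed through the layers, and the hidden states at the prompt positions are read back out as σ_{i+1}.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_prompt, n_tokens = 8, 2, 4

# Hypothetical stand-in for a Transformer layer stack: a fixed random
# linear map plus a residual connection. A real model would apply
# self-attention and feed-forward blocks here.
W = rng.normal(scale=0.1, size=(d_model, d_model))

def transformer_layers(hidden):
    return hidden + np.tanh(hidden @ W)

input_embeddings = rng.normal(size=(n_tokens, d_model))
sigma = rng.normal(size=(n_prompt, d_model))  # sigma_0: initial soft prompts

for i in range(3):
    # Process the current soft prompts together with the input embeddings.
    hidden = transformer_layers(np.concatenate([sigma, input_embeddings], axis=0))
    # The hidden states at the prompt positions become sigma_{i+1}.
    sigma = hidden[:n_prompt]

print(sigma.shape)
```

Each pass reuses the same layers, so the prompts are refined by the model itself rather than by an external optimizer; in practice a training objective would drive how useful the refined prompts become.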

Updated 2025-10-10

Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
