Multi-Layer Input Composition in Prefix-Tuning

Consider a 3-layer Transformer model where a tuning method involves prepending a unique set of trainable vectors (a 'prefix') to the input of each layer. The input to the first layer (Layer 1) is formed by concatenating its prefix, P1, with the initial sequence embeddings, H0. The output passed from Layer 1 to Layer 2 consists only of the hidden states corresponding to the original sequence, which we'll call H1. Based on this established pattern, describe the precise composition of the complete input that Layer 3 will receive.

Short Answer

Following the pattern, Layer 2 receives [P2; H1] and passes forward only H2, the hidden states corresponding to the original sequence. Layer 3 therefore receives [P3; H2]: its own trainable prefix P3 concatenated with the Layer 2 output for the original sequence. Prefix positions are never carried forward between layers; each layer re-injects its own fresh prefix.

Tags

Ch.3 Prompting - Foundations of Large Language Models