Multiple Choice

In a specific parameter-efficient tuning method, each layer of a transformer is adapted by prepending a sequence of new, trainable vectors to the sequence of hidden states from the previous layer. Suppose for a given layer, the sequence of these new trainable vectors has a length of 20, and the sequence of hidden states corresponding to the original text input has a length of 128. After this layer processes the combined sequence, a new set of hidden states is generated. How is the complete hidden state sequence for the next layer constructed?
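For concreteness, here is a minimal PyTorch sketch of the setup the question describes: 20 trainable vectors prepended to 128 text hidden states before a layer processes the combined sequence. The hidden size, head count, and all variable names are illustrative assumptions, not part of the question.

```python
import torch
import torch.nn as nn

PREFIX_LEN = 20   # trainable vectors per layer (given in the question)
TEXT_LEN = 128    # hidden states for the original text input (given)
D_MODEL = 768     # hidden size: an assumed, illustrative value

# New trainable vectors for this layer (learned during tuning).
prefix = nn.Parameter(torch.randn(PREFIX_LEN, D_MODEL))

# Hidden states arriving from the previous layer for the text positions.
text_hidden = torch.randn(TEXT_LEN, D_MODEL)

# Prepend the trainable vectors to form the combined input sequence.
combined = torch.cat([prefix, text_hidden], dim=0)   # shape: (148, D_MODEL)

# A stand-in transformer layer processes the combined sequence and
# emits a new hidden state for every one of the 148 positions.
layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=12, batch_first=True)
output = layer(combined.unsqueeze(0)).squeeze(0)     # shape: (148, D_MODEL)

# The question asks how the input sequence for the *next* layer is
# assembled from this 148-position output (and that layer's own
# trainable vectors).
```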




Tags: Ch.3 Prompting - Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Application in Bloom's Taxonomy