Learn Before
Multiple Choice

Two research teams are adapting a large, pre-trained language model for a sentiment analysis task.

  • Team Alpha freezes all the original model weights and prepends a small sequence of trainable vectors to the input text's embeddings. These new vectors are the only parameters updated during training.
  • Team Beta also freezes the original model weights but inserts a small set of trainable vectors into each layer of the model architecture, which are then updated during training.

Based on these descriptions, which team is correctly implementing the technique where adaptation is achieved exclusively by manipulating the input representation fed into the first layer of the model?

Team Alpha

Team Beta
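Team Alpha's approach (commonly called prompt tuning) can be illustrated with a minimal sketch. All names and sizes below (`d_model`, `n_prompt`, `seq_len`, the random embeddings) are illustrative assumptions, not part of any real model: the only trainable parameters are a few vectors concatenated in front of the frozen input embeddings, so adaptation happens entirely in the representation fed to the first layer.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 8   # embedding width (illustrative)
n_prompt = 4  # number of trainable prompt vectors
seq_len = 6   # tokens in the input text

# Frozen component: a stand-in for the pre-trained model's embedded input text.
token_embeddings = rng.normal(size=(seq_len, d_model))

# Team Alpha's ONLY trainable parameters: a short sequence of vectors
# prepended to the input embeddings.
prompt_vectors = rng.normal(size=(n_prompt, d_model))

# The model's first layer receives the prompt vectors followed by the
# original token embeddings; nothing inside the frozen model changes.
first_layer_input = np.concatenate([prompt_vectors, token_embeddings], axis=0)

print(first_layer_input.shape)  # (n_prompt + seq_len, d_model)
```

Team Beta's method, by contrast, would place trainable vectors inside every layer of the network, so its adaptation is not confined to the first layer's input.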

Updated 2025-10-03

Tags

Data Science

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science