True/False

When adapting a pre-trained bidirectional language model to serve as the encoder in a sequence-to-sequence architecture for a task like machine translation, it is standard practice to freeze the encoder's parameters and train only the randomly initialized decoder.

False

True
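
For context, here is a minimal sketch of how this adaptation is commonly wired up, assuming the Hugging Face Transformers library is available; the checkpoint name "bert-base-uncased" is illustrative, not one named in the question. Out of the box, all parameters, encoder included, remain trainable, and freezing the encoder is an explicit opt-in rather than the default:

```python
# Sketch only: assumes Hugging Face Transformers is installed, and uses
# "bert-base-uncased" as an illustrative pre-trained bidirectional checkpoint.
from transformers import EncoderDecoderModel

# Warm-start a seq2seq model: the encoder loads pre-trained bidirectional
# weights, while the decoder gains cross-attention layers that are
# randomly initialized.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# By default every parameter requires gradients, so standard fine-tuning
# updates the pre-trained encoder together with the new decoder.
assert all(p.requires_grad for p in model.encoder.parameters())

# Freezing the encoder is possible, but it is an explicit choice:
for p in model.encoder.parameters():
    p.requires_grad = False
```

Whether freezing is a good trade-off depends on the amount of task data and compute available; the sketch only shows which behavior is the default.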

Updated 2025-10-06

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Application in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science