Short Answer

Justification for Pre-training Task Classification

Explain why classifying pre-training tasks for language models by their objective (e.g., predicting masked tokens) is considered a more robust and conceptually coherent approach than classifying them by model architecture (e.g., encoder-only). Provide a specific example to support your reasoning.
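
Though the answer itself calls for prose, the "objective" framing in the question can be made concrete in code. The following is a minimal sketch, assuming PyTorch; the names masked_lm_loss and TinyEncoder and the toy constants are hypothetical, not taken from any source. It shows that a masked-token objective is defined entirely by how the input is corrupted and how the loss is computed, independent of the model that produces the logits.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID, IGNORE = 100, 0, -100  # toy vocabulary size and ids (hypothetical)

def masked_lm_loss(model: nn.Module, tokens: torch.Tensor, p: float = 0.15) -> torch.Tensor:
    """Masked-token objective: corrupt ~p of the positions, predict the originals.

    Nothing here depends on what `model` is, only that it maps
    (batch, seq) token ids to (batch, seq, VOCAB) logits.
    """
    mask = torch.rand(tokens.shape) < p         # choose positions to mask
    inputs = tokens.masked_fill(mask, MASK_ID)  # corrupted input sequence
    labels = tokens.masked_fill(~mask, IGNORE)  # score masked positions only
    logits = model(inputs)
    return F.cross_entropy(logits.view(-1, VOCAB), labels.view(-1),
                           ignore_index=IGNORE)

class TinyEncoder(nn.Module):
    """One possible architecture; any model with the same input/output
    shapes would plug into masked_lm_loss unchanged."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.enc = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
        self.out = nn.Linear(32, VOCAB)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(self.enc(self.emb(x)))

loss = masked_lm_loss(TinyEncoder(), torch.randint(1, VOCAB, (2, 16)))
print(loss.item())

Note that masked_lm_loss never inspects the model: a decoder-style Transformer or an RNN with the same input/output shapes would plug in unchanged, which is exactly the sense in which the objective, rather than the architecture, defines the pre-training task.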



Tags: Ch.1 Pre-training - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science