Learn Before
Architectural Approaches to Self-Supervised Pre-training
Self-supervised pre-training can be examined through the lens of three neural network designs: encoder-only, decoder-only, and encoder-decoder architectures. Encoder-only models such as BERT are typically pre-trained with masked language modeling, decoder-only models such as GPT with next-token (causal) language modeling, and encoder-decoder models such as T5 with sequence-to-sequence denoising objectives. Such analysis usually focuses on the Transformer architecture, since it underpins the vast majority of modern pre-trained models in Natural Language Processing.

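To make the taxonomy concrete, here is a minimal sketch that loads one representative public checkpoint from each architectural family. It assumes the Hugging Face transformers library and the bert-base-uncased, gpt2, and t5-small checkpoints; these are illustrative choices, not ones prescribed by the text above.

from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,    # encoder-only family (e.g., BERT)
    AutoModelForCausalLM,    # decoder-only family (e.g., GPT)
    AutoModelForSeq2SeqLM,   # encoder-decoder family (e.g., T5)
)

# Encoder-only: bidirectional attention, pre-trained with masked language
# modeling -- the model predicts the token hidden behind [MASK].
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
batch = enc_tok("Paris is the [MASK] of France.", return_tensors="pt")
mask_pos = (batch["input_ids"][0] == enc_tok.mask_token_id).nonzero().item()
pred_id = enc(**batch).logits[0, mask_pos].argmax().item()
print(enc_tok.decode([pred_id]))  # expected: "capital"

# Decoder-only: causal (left-to-right) attention, pre-trained to predict
# the next token; naturally suited to open-ended generation.
dec_tok = AutoTokenizer.from_pretrained("gpt2")
dec = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = dec_tok("Paris is the", return_tensors="pt")
print(dec_tok.decode(dec.generate(**prompt, max_new_tokens=5)[0]))

# Encoder-decoder: the encoder reads the whole input bidirectionally and the
# decoder generates the output token by token; pre-trained with a
# sequence-to-sequence denoising objective.
s2s_tok = AutoTokenizer.from_pretrained("t5-small")
s2s = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
src = s2s_tok("translate English to German: Hello.", return_tensors="pt")
print(s2s_tok.decode(s2s.generate(**src, max_new_tokens=10)[0], skip_special_tokens=True))

The three auto-classes mirror the pre-training objectives covered in the related cards below: masked language modeling for encoders, word (next-token) prediction for decoders, and denoising or prefix-style objectives for encoder-decoder models.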
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Comparison of Self-Supervised Pre-training and Self-Training
Architectural Categories of Pre-trained Transformers
Self-Supervised Classification Tasks for Encoder Training
Prefix Language Modeling (PrefixLM)
Mask-Predict Framework
Discriminative Training
Learning World Knowledge from Unlabeled Data
Emergent Linguistic Capabilities from Pre-training
Self-Supervised Pre-training of Encoders via Masked Language Modeling
Word Prediction as a Core Self-Supervised Task
Learning World Knowledge from Unlabeled Data via Self-Supervision
A research team has a massive collection of unlabeled historical texts. Their goal is to pre-train a language model that understands the specific vocabulary and sentence structures within these documents, but they have no budget for manual data annotation. Which of the following approaches is the most effective and feasible for their pre-training task?
Analysis of Supervision Signal Generation
A team is developing a pre-training strategy for a new language model using a large corpus of unlabeled text. Which of the following proposed tasks best exemplifies the principles of self-supervised learning?
Prevalence of Self-Supervised Pre-training in NLP
Learn After
A research team is building a model designed specifically for summarizing long scientific articles into a few concise paragraphs. The model must be able to process the entire source article to understand its full context before generating the summary. Given this requirement for a sequence-to-sequence task, which architectural approach would be the most effective choice for the model's pre-training and fine-tuning?
Match each architectural approach for self-supervised pre-training with the category of tasks it is primarily designed to handle.
Evaluating Architectural Choices for a Chatbot