Learn Before
Core Problem Types in NLP Pre-training
Pre-training in Natural Language Processing primarily addresses two categories of problems: sequence modeling (also referred to as sequence encoding) and sequence generation. Although the two take different forms, they can be unified and described, for simplicity, by a single general model formulation.
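
As a rough sketch of that unified view (the notation here, including the symbols x, y, H, and \theta, is an illustrative assumption rather than the course's exact formulation), both problem types can be read as one conditional model over token sequences. Sequence encoding maps an input x = x_1 \ldots x_m to contextual representations that downstream tasks consume:

H = f(x_1, \ldots, x_m; \theta)

Sequence generation instead factorizes the probability of the output sequence autoregressively, predicting each token from the tokens before it:

\Pr(y \mid x; \theta) = \prod_{t=1}^{n} \Pr(y_t \mid y_{<t}, x; \theta)

Under this reading, the two differ mainly in what the shared model is asked to output: representations in the first case, a distribution over next tokens in the second.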

Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Foundations of Large Language Models
Related
Types of Pretrained Language Models
Pre-training tasks
Extensions of Pre-trained Models
Foundation Models
Historical Context of Pre-training
Examples of Pre-trained Transformers by Architecture
Paradigm Shift in NLP Driven by Pre-training
Future Research Directions in Large-Scale Pre-training
Role of Pre-training in Developing Latent Abilities
Common Data Sources for Pre-training LLMs
Training Auxiliary Parameters with a Fixed Transformer Model
Synergy of Transformers and Self-Supervised Learning
Scope of Introductory Discussions on Pre-training
Application of Self-Supervised Pre-training Across Model Architectures
Scope of Foundational Concepts in Pre-training and Adaptation
Tokens vs. Words in NLP
Self-supervised Pre-training
Data Scale Disparity: Pre-training vs. Fine-tuning
A small biotech company wants to build an AI model to classify protein sequences for a very specific function. They have a high-quality, but small, labeled dataset of 10,000 sequences. They have limited computational resources and a tight deadline. Which of the following strategies represents the most effective and efficient approach for them to develop a high-performing model?
Diagnosing a Flawed Model Development Strategy
The development of large-scale AI models typically involves two distinct stages. Match each characteristic below to the stage it describes.
Scope of Introductory Discussion on Pre-training in NLP
Learn After
Sequence Encoding Models
Sequence Generation Models
Architectural Differences Between Sequence Encoding and Generation Models
General Formulation of a Sequence Model
A large language model is pre-trained on a vast text corpus. Its training objective is to take a sentence, randomly mask 15% of the words, and then predict only the original masked words by looking at all the surrounding unmasked words (both to the left and right). Which statement best analyzes the primary goal of this specific pre-training approach?
Analyzing Pre-training Objectives
Match each Natural Language Processing (NLP) task with the primary pre-training problem type it is designed to solve.
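
To make the masked-word objective in the question above concrete, the following is a minimal, self-contained Python sketch of the masking step it describes; the function name mask_tokens, the handling of the 15% rate, and the whitespace tokenization are illustrative assumptions, not an actual model implementation.

import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    # Choose ~15% of positions at random, replace them with [MASK],
    # and remember the originals: training loss is computed ONLY on these.
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * mask_rate))
    positions = rng.sample(range(len(tokens)), n_mask)
    corrupted = list(tokens)
    targets = {}  # position -> original token to be predicted
    for p in positions:
        targets[p] = corrupted[p]
        corrupted[p] = MASK
    return corrupted, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
corrupted, targets = mask_tokens(tokens)
print(corrupted)  # the model sees context on BOTH sides of each [MASK]
print(targets)    # it is trained to recover only these masked originals

Because every unmasked token on both sides stays visible, a model trained this way learns bidirectional context rather than left-to-right generation.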