Examples of Pre-trained Transformers by Architecture
Encoder-only -> natural language understanding
- BERT
- RoBERTa (drops BERT's next sentence prediction (NSP) objective and trains longer on more data)
Decoder-only -> language modeling (autoregressive text generation)
- GPT series
Encoder-decoder -> both natural language understanding and generation (sequence-to-sequence tasks); see the loading sketch after this list
- BART
- T5
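
A minimal sketch of how the three categories map to different model classes, assuming the Hugging Face `transformers` library (not part of the original note); the checkpoint names are illustrative choices, not prescriptions.

```python
# Minimal sketch: one representative checkpoint per architecture category.
# Assumes the Hugging Face `transformers` package; checkpoints are illustrative.
from transformers import AutoModel, AutoModelForCausalLM, AutoModelForSeq2SeqLM

# Encoder-only (BERT/RoBERTa): contextual representations for understanding
# tasks such as classification or tagging.
encoder_only = AutoModel.from_pretrained("bert-base-uncased")

# Decoder-only (GPT series): autoregressive language modeling / text generation.
decoder_only = AutoModelForCausalLM.from_pretrained("gpt2")

# Encoder-decoder (BART/T5): reads the full input, then generates a new output
# sequence, e.g. summarization or translation.
encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
```

In practice, encoder-only backbones are usually wrapped with a task-specific head (e.g. a classification layer) during fine-tuning, while decoder-only and encoder-decoder models can generate text directly.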
Tags
Data Science
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.2 Generative Models - Foundations of Large Language Models
Related
Types of Pretrained Language Models
Pre-training tasks
Extensions of Pre-trained models
Foundation Models
Historical Context of Pre-training
Examples of Pre-trained Transformers by Architecture
Paradigm Shift in NLP Driven by Pre-training
Future Research Directions in Large-Scale Pre-training
Role of Pre-training in Developing Latent Abilities
Common Data Sources for Pre-training LLMs
Training Auxiliary Parameters with a Fixed Transformer Model
Synergy of Transformers and Self-Supervised Learning
Core Problem Types in NLP Pre-training
Scope of Introductory Discussions on Pre-training
Application of Self-Supervised Pre-training Across Model Architectures
Scope of Foundational Concepts in Pre-training and Adaptation
Tokens vs. Words in NLP
Self-supervised Pre-training
Data Scale Disparity: Pre-training vs. Fine-tuning
A small biotech company wants to build an AI model to classify protein sequences for a very specific function. They have a high-quality, but small, labeled dataset of 10,000 sequences. They have limited computational resources and a tight deadline. Which of the following strategies represents the most effective and efficient approach for them to develop a high-performing model?
Diagnosing a Flawed Model Development Strategy
The development of large-scale AI models typically involves two distinct stages. Match each characteristic below to the stage it describes.
Scope of Introductory Discussion on Pre-training in NLP
Architectural Suitability for Language Processing Tasks
A research team is developing a system to automatically summarize long scientific articles. The system needs to first read and understand the entire source article and then generate a concise, new paragraph that captures the key findings. Which architectural category of pre-trained model is fundamentally best suited for this kind of sequence-to-sequence task?
Match each pre-trained model architectural category to the description of its primary information processing design.
Learn After
BERT
BART
T5
BERT (Bidirectional Encoder Representations from Transformers)
RoBERTa
GPT Series
LLaMA2
DeepSeek-V3
Falcon
Mistral
PaLM-540B
Gemma-7B
Gemma2
A software development team is tasked with building a feature that can automatically generate a concise, one-paragraph summary from a long news article. The system needs to first comprehend the full context of the source article and then generate a new, coherent summary. Based on the typical strengths of different foundational model designs, which of the following models would be the most suitable choice for this specific task?
Match each pre-trained model with the description that best fits its architectural design and primary use case.
Evaluating Model Architecture Selection for a Classification Task