Concept

Self-supervised Pre-training

Self-supervised pre-training is a paradigm in which a model learns from vast quantities of unlabeled data by deriving its own supervisory signals directly from the input, for example by predicting masked or corrupted parts of the data. After this pre-training phase, the model is adapted to specific downstream tasks, either through supervised fine-tuning on labeled datasets or through prompting, which enables zero-shot or few-shot learning.
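
To make the objective concrete, here is a minimal sketch of one masked-token pre-training step, assuming a PyTorch environment. The model, the toy vocabulary, and names such as TinyMaskedLM and mask_tokens are illustrative, not from the source: random positions are hidden, and the loss rewards the model for reconstructing them from context alone, so no human labels are needed.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 100   # toy vocabulary; id 0 is reserved for [MASK] (illustrative)
MASK_ID = 0
EMBED_DIM = 32

class TinyMaskedLM(nn.Module):
    """A deliberately small encoder that predicts the identity of masked tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=4, batch_first=True
        )
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return self.head(hidden)  # per-position logits over the vocabulary

def mask_tokens(token_ids, mask_prob=0.15):
    """Build the self-supervised objective: hide random positions and keep
    the original tokens as targets; unmasked positions are ignored."""
    labels = token_ids.clone()
    hidden_positions = torch.rand(token_ids.shape) < mask_prob
    labels[~hidden_positions] = -100          # -100 = ignored by the loss
    corrupted = token_ids.clone()
    corrupted[hidden_positions] = MASK_ID
    return corrupted, labels

model = TinyMaskedLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

# "Unlabeled corpus": random token ids stand in for real tokenized text.
batch = torch.randint(1, VOCAB_SIZE, (8, 16))

inputs, labels = mask_tokens(batch)
logits = model(inputs)                        # (batch, seq_len, vocab)
loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), labels.reshape(-1))
loss.backward()
optimizer.step()
print(f"masked-prediction loss: {loss.item():.3f}")
```

BERT-style encoders use this kind of masked-prediction objective at scale; autoregressive models such as GPT instead predict each next token, but the principle is the same: the supervisory signal comes from the raw input itself.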

Updated 2026-04-18

Tags

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
