Concept

Shift from Fine-Tuning to Prompting with Pre-trained Models

The widespread adoption of modern prompting began with the emergence of large pre-trained models such as BERT. Initially, these models were adapted to specific downstream tasks primarily through fine-tuning. Researchers then discovered that adding specific words or sentences as 'prompts' to the input could guide a model to perform a task effectively, without requiring extensive fine-tuning.
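As a minimal sketch of the idea, a classification task can be reframed as a fill-in-the-blank prompt that a masked language model already knows how to complete. The template, the `[MASK]` convention, and the label-word mapping below are illustrative assumptions, not examples from the chapter:

```python
# Hypothetical sketch: turning sentiment classification into a
# cloze-style prompt instead of fine-tuning a classifier head.

def build_prompt(review: str) -> str:
    """Wrap the input in a template so the model can 'fill in'
    a label word at the [MASK] position."""
    return f"{review} Overall, it was [MASK]."

# A verbalizer maps the model's predicted word back to a task label.
VERBALIZER = {"great": "positive", "terrible": "negative"}

prompt = build_prompt("The film kept me on the edge of my seat.")
print(prompt)
```

The model's most likely word at the mask (e.g. "great") is then read off and mapped through the verbalizer to a label, so no task-specific parameters need to be trained.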

Updated 2026-04-30
