Multiple Choice

A development team is updating a pre-trained language model by further training it on a curated dataset of specific prompts and their desired, high-quality outputs (e.g., prompt: 'Explain gravity to a 5-year-old,' output: 'Gravity is like a big, invisible hug from the Earth...'). Why is this specific training process considered a method for model alignment?
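The process the question describes is supervised fine-tuning (SFT). As a toy illustration only, the sketch below uses a hypothetical bigram count "model" (real SFT updates a neural LM's weights by gradient descent on cross-entropy loss): a model "pre-trained" on generic text is further trained on a curated, desired output, shifting its probability mass toward the aligned response style.

```python
from collections import defaultdict

class BigramLM:
    """Toy stand-in for a language model: bigram counts over tokens."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sentences):
        # "Training" here just accumulates bigram counts.
        for s in sentences:
            tokens = s.lower().split()
            for a, b in zip(tokens, tokens[1:]):
                self.counts[a][b] += 1

    def prob(self, a, b):
        # P(next token = b | current token = a)
        total = sum(self.counts[a].values())
        return self.counts[a][b] / total if total else 0.0

# "Pre-training" on generic, unaligned text.
lm = BigramLM()
lm.train(["gravity is a force", "gravity is curvature of spacetime"])

# "Fine-tuning" on a curated prompt/response pair (the child-friendly answer),
# repeated to mimic the extra weight curated data carries during SFT.
lm.train(["gravity is like a big invisible hug from the earth"] * 3)

# The curated data shifts probability mass toward the desired style:
# after fine-tuning, "is" -> "like" dominates (3 of 5 observed continuations).
print(round(lm.prob("is", "like"), 2))
```

The point of the sketch: fine-tuning on curated demonstrations doesn't add new capabilities so much as it reshapes the model's output distribution toward the desired, human-preferred behavior, which is why the process counts as alignment.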


Updated 2025-09-26


Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science