Problem

Challenge of Articulating Human Preferences for Data Annotation

A primary difficulty in LLM alignment is that human values and expectations are complex and hard to describe. People often struggle to articulate what is ethically correct or culturally appropriate, which makes collecting and annotating fine-tuning data far less straightforward than for tasks with objectively correct outputs.
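Because there is no single objectively correct answer, preference-annotation pipelines typically collect multiple subjective judgments per item and then resolve disagreement. The sketch below illustrates this with a minimal, hypothetical record type (the field names and resolution rule are assumptions for illustration, not from the text): each comparison stores votes from several annotators, and disagreement is resolved by majority vote while tracking how much the annotators actually agreed.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PreferenceRecord:
    """One comparison item in a preference-annotation dataset.

    Illustrative structure only; real annotation schemas vary.
    """
    prompt: str
    response_a: str
    response_b: str
    votes: list = field(default_factory=list)  # each annotator picks "a" or "b"

    def majority_label(self) -> str:
        """Resolve annotator disagreement by simple majority vote."""
        label, _ = Counter(self.votes).most_common(1)[0]
        return label

    def agreement(self) -> float:
        """Fraction of annotators who agree with the majority label."""
        return Counter(self.votes)[self.majority_label()] / len(self.votes)

record = PreferenceRecord(
    prompt="Is it okay to read a colleague's unattended screen?",
    response_a="Yes; anything visible in a shared office is fair game.",
    response_b="No; that violates a reasonable expectation of privacy.",
    votes=["b", "b", "a", "b", "a"],  # subjective judgments rarely align perfectly
)
print(record.majority_label())  # "b"
print(record.agreement())       # 0.6
```

Low agreement scores flag exactly the items where preferences are hard to articulate, which is why such items are often routed to adjudication or excluded rather than labeled by fiat.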


Updated 2026-05-02


Ch.4 Alignment - Foundations of Large Language Models
