Multiple Choice

An LLM is provided with a compressed representation of context, denoted σ, and an input z. The model's goal is to predict the most likely output y. After processing σ and z, the model computes the following conditional probabilities over four candidate outputs:

  • Pr(y='mat' | σ, z) = 0.65
  • Pr(y='roof' | σ, z) = 0.25
  • Pr(y='sky' | σ, z) = 0.05
  • Pr(y='idea' | σ, z) = 0.05

Based on the principle of selecting the output that maximizes the conditional probability, what will the model's final prediction, ŷ_σ, be?
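The selection rule described above, ŷ_σ = argmax_y Pr(y | σ, z), can be sketched in a few lines of Python using the probabilities given in the question (the dictionary below is purely illustrative):

```python
# Conditional probabilities Pr(y | sigma, z) as listed in the question.
probs = {"mat": 0.65, "roof": 0.25, "sky": 0.05, "idea": 0.05}

# The prediction y_hat_sigma is the candidate output with the
# highest conditional probability (the argmax over outputs).
y_hat_sigma = max(probs, key=probs.get)
print(y_hat_sigma)  # → mat
```

Since 0.65 exceeds every other probability in the table, the argmax picks that candidate.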

Updated 2025-09-28

Tags

Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Application in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science