Learn Before
Deconstructing the LLM Prediction Formula
A large language model uses the following formula to make a prediction when given a compressed summary of a document (σ) and a specific user query (z):

ŷ_σ = argmax_y Pr(y | σ, z)
Break down this formula by explaining what each of the following four components represents in this specific scenario:
- σ
- z
- y
- ŷ_σ
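The relationship among these four components can be sketched as a simple argmax over a conditional distribution. The probabilities and output labels below are hypothetical, chosen only to illustrate the rule:

```python
# Minimal sketch of the prediction rule ŷ_σ = argmax_y Pr(y | σ, z).
# The distribution below is hypothetical, for illustration only.

def predict(cond_probs: dict) -> str:
    """Return ŷ_σ: the output y with the highest Pr(y | σ, z)."""
    return max(cond_probs, key=cond_probs.get)

# Example distribution Pr(y | σ, z) after the model has processed σ and z:
example = {"a": 0.5, "b": 0.3, "c": 0.2}
assert predict(example) == "a"
```

Here σ and z together condition the distribution, y ranges over candidate outputs, and ŷ_σ is the single output the argmax selects.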
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Formula for Optimizing Soft Prompts via Context Compression
Formula for Soft Prompt Optimization by Minimizing KL Divergence
An LLM is provided with a compressed representation of context, denoted as σ, and an input z. The model's goal is to predict the most likely output y. After processing σ and z, the model computes the following conditional probabilities for four possible outputs:
- Pr(y='mat' | σ, z) = 0.65
- Pr(y='roof' | σ, z) = 0.25
- Pr(y='sky' | σ, z) = 0.05
- Pr(y='idea' | σ, z) = 0.05
Based on the principle of selecting the output that maximizes the conditional probability, what will the model's final prediction, ŷ_σ, be?
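The argmax rule applied to the probabilities above can be checked with a short sketch:

```python
# Conditional probabilities Pr(y | σ, z) from the question above.
probs = {"mat": 0.65, "roof": 0.25, "sky": 0.05, "idea": 0.05}

# ŷ_σ = argmax_y Pr(y | σ, z): pick the output with the highest probability.
y_hat_sigma = max(probs, key=probs.get)
print(y_hat_sigma)  # → mat
```

Since 0.65 exceeds every other probability, the model's prediction ŷ_σ is 'mat'.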
Analyzing an LLM's Incorrect Prediction