Learn Before
Historical Context and Computational Challenges of Maximum Probability Prediction
The objective of finding the output ŷ that maximizes Pr(y|x) is a foundational concept in NLP, with historical roots in early probabilistic models for speech recognition and machine translation. While this optimization can be solved directly for simple tasks, such as single-token prediction with a very small language model, it poses significant computational hurdles in most practical LLM scenarios. The difficulty is twofold: computing the conditional probability Pr(y|x), and searching the vast output space for the argmax. Both are recognized as fundamental, well-studied problems in the broader field of artificial intelligence, which allows established techniques to be applied.
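As a minimal sketch of what "searching the output space for the argmax" means, the toy example below brute-forces ŷ = argmax Pr(y|x) over every candidate sequence. The vocabulary and the scoring function are hypothetical stand-ins (not from the text) for a real language model's conditional probability; the point is that the loop visits all |V|^L sequences, which is exactly what becomes infeasible at LLM scale.

```python
import itertools
import math

# Toy setup: a tiny vocabulary and a made-up conditional log-probability
# function standing in for a language model's log Pr(y | x).
VOCAB = ["a", "b", "c"]

def log_prob(y, x):
    # Hypothetical scoring rule: reward output tokens that match the
    # last input token (illustrative only).
    return sum(0.0 if tok == x[-1] else -1.0 for tok in y)

def brute_force_argmax(x, length):
    # Exhaustive search: enumerate all |V|**length candidate sequences
    # and keep the highest-scoring one.
    best, best_score = None, -math.inf
    for y in itertools.product(VOCAB, repeat=length):
        score = log_prob(y, x)
        if score > best_score:
            best, best_score = y, score
    return best

print(brute_force_argmax(["b"], 2))  # → ('b', 'b')
```

With |V| = 3 and L = 2 the loop checks only 9 sequences, but the count grows as |V|^L, which is why practical inference replaces this exhaustive loop with heuristic search.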
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Hypothesis in LLM Inference
Mathematical Formulation of the Search Problem in LLM Inference
Exploration vs. Exploitation in LLM Search
Search Tree Structure in Token Generation
Heuristic Search Algorithms for LLM Inference
Efficient Generation of Candidate Solutions via Search Algorithms
Search for Optimal or Sub-optimal Sequences in LLM Inference
Root of the Search Space as a Representation of Input (x)
A text generation model has a vocabulary of 10,000 possible words it can choose from for each position in a sequence. If this model were to find the optimal output by evaluating every single possible sequence, how would the total number of sequences to check change if the desired output length is increased from 3 words to 5 words?
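The arithmetic behind this question can be checked directly: with a vocabulary of size V, there are V^L candidate sequences of length L, so moving from L = 3 to L = 5 multiplies the search space by V^2.

```python
V = 10_000  # vocabulary size from the question

seqs_len3 = V ** 3  # candidate sequences of length 3
seqs_len5 = V ** 5  # candidate sequences of length 5

print(seqs_len3)               # 1000000000000 (10**12)
print(seqs_len5)               # 100000000000000000000 (10**20)
print(seqs_len5 // seqs_len3)  # 100000000 (10**8), i.e. a V**2-fold increase
```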
Evaluating an Inference Strategy
The Impracticality of Exhaustive Search
Mathematical Representation of an Output Sequence
Learn After
Model-Specific Optimizations for LLM Inference
Modeling and Efficient Computation of Conditional Token Probabilities
Efficient Generation of Candidate Solutions via Search Algorithms
An AI research team is developing a new generative model for creating complex musical compositions. They find that while their model can accurately calculate the probability of any given short musical phrase, generating a full, high-quality, multi-minute symphony is computationally intractable because they cannot feasibly check every possible combination of notes to find the absolute best one. How does this team's challenge relate to the broader field of artificial intelligence?
Comparing Computational Challenges in AI Tasks
Identifying Common Computational Structures in AI
Accuracy-Efficiency Trade-off in LLM Inference