Learn Before
Inference-Time Compute Scaling for Improved Reasoning
A recent finding in the development of Large Language Models is that spending more computation at inference time, for example by generating longer chains of thought or by sampling multiple candidate solutions and selecting among them, can substantially improve a model's performance on complex reasoning tasks without any additional training.
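One common form of inference-time compute scaling is self-consistency: sample several independent reasoning passes and take a majority vote over their final answers. The sketch below is a minimal, hypothetical illustration of that idea; `noisy_solver` is a stand-in for one stochastic reasoning pass of a language model, not a real model call.

```python
import random
from collections import Counter

def self_consistency(sample_answer, n=10, rng=None):
    """Draw n candidate answers and return the plurality vote.

    Each call to `sample_answer` represents one full reasoning pass
    of a model; increasing n spends more compute at inference time.
    """
    rng = rng or random.Random(0)
    answers = [sample_answer(rng) for _ in range(n)]
    majority, _count = Counter(answers).most_common(1)[0]
    return majority

# Hypothetical noisy solver: returns the correct answer "42" about
# 60% of the time, and a random wrong digit otherwise.
def noisy_solver(rng):
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

# A single sample is right only ~60% of the time; voting over many
# samples makes the correct answer the plurality with high probability.
print(self_consistency(noisy_solver, n=1))
print(self_consistency(noisy_solver, n=101))
```

The point of the sketch is the trade-off it makes visible: accuracy improves as `n` grows, but each increment of `n` costs one more full generation pass at inference time.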
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.2 Generative Models - Foundations of Large Language Models
Related
Architectural Adaptation of LLMs for Long Sequences
Types of LLM Scaling
Multifaceted Nature of LLM Scaling
A research lab has a powerful language model that is highly effective at generating short, creative story paragraphs. The lab now wants to use this model to write entire multi-chapter novels, which requires maintaining plot consistency and character arcs over tens of thousands of words. Which of the following development priorities best represents a shift in scaling dimension to meet this new requirement?
Evaluating a Model Scaling Strategy
Scaling LLMs Beyond Size
Learn After
Optimizing a Language Model for Complex Problem-Solving
A company has a large, pre-trained language model that performs well on general tasks but struggles with complex, multi-step mathematical reasoning problems. The company cannot afford the time or resources to retrain or fine-tune the model. Which of the following strategies best exemplifies using additional computational resources at the time of generating a response to improve the model's reasoning capabilities?
Analyzing Performance Discrepancies in a Pre-Trained Model