Classifying LLM Scaling Strategies
Consider two common methods for improving a Large Language Model's performance: (1) domain-adaptive fine-tuning and (2) Chain-of-Thought prompting. For each method, state whether it relies on parameter updates or operates purely at inference time, and justify your classification.
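The distinction the question targets can be made concrete with a toy sketch (a hypothetical stand-in model, not an actual LLM): fine-tuning applies a gradient step that changes the stored parameters, while Chain-of-Thought prompting only wraps the input and leaves the parameters untouched.

```python
import numpy as np

# Toy "model": one weight vector standing in for LLM parameters.
rng = np.random.default_rng(0)
weights = rng.normal(size=4)
original = weights.copy()

# (1) Domain-adaptive fine-tuning: a gradient step UPDATES the parameters.
x, y = np.ones(4), 1.0              # toy in-domain training example
pred = weights @ x
grad = 2 * (pred - y) * x           # gradient of squared error w.r.t. weights
weights -= 0.1 * grad               # parameters change: a new, persistent model

params_changed = not np.allclose(weights, original)

# (2) Chain-of-Thought prompting: only the INPUT changes; weights stay fixed.
frozen = weights.copy()
prompt = "Q: 17 * 3 = ?"
cot_prompt = prompt + " Let's think step by step."  # inference-time wrapper
_ = weights @ x                     # same forward pass, no gradient, no update

cot_changed = not np.allclose(weights, frozen)
print(params_changed, cot_changed)  # True False
```

The sketch shows why the classification holds: only the fine-tuning branch writes to `weights`, so only it counts as a parameter-update method.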
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Evaluating Model Improvement Strategies
Classifying LLM Scaling Strategies
A development team is using a large, pre-trained language model that is computationally expensive to modify. They need to enhance its performance for a specific, temporary project. A key requirement is that any performance enhancement must be easily removable, restoring the model to its original state without needing to store a separate version. Which scaling approach is most suitable for this scenario?
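One approach that fits these requirements is a detachable low-rank adapter (LoRA-style): the base weights stay frozen, a small trained matrix product is added at inference, and removing it restores the original model exactly without storing a second copy. A minimal numpy sketch under these assumptions (toy matrices, not a real LLM):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))      # frozen base weights (never modified)
W_orig = W.copy()

# Low-rank adapter: only the small matrices A and B would be trained.
r = 1
A = rng.normal(size=(r, 4)) * 0.1
B = rng.normal(size=(4, r)) * 0.1

x = np.ones(4)
adapted = (W + B @ A) @ x        # adapter active: enhanced behavior
removed = W @ x                  # adapter dropped: original behavior restored

base_untouched = np.allclose(W, W_orig)
print(base_untouched)            # True: no separate model copy needed
```

Because `W` is never written to, "removal" is just omitting the `B @ A` term, which satisfies the scenario's easy-rollback requirement.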