Learn Before
Resource Allocation for Model Improvement
An NLP research lab wants to improve the performance of its existing pre-trained, bidirectional language model but has a fixed budget for the project. Two competing proposals are on the table. Based on the principles of scaling such models, which proposal is more likely to yield significant improvements in general language understanding, and why?
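To make the budget trade-off behind this question concrete, here is a minimal sketch using the common approximation that training compute scales as roughly 6 × (parameter count) × (training tokens). All numbers below (the budget, the base model size) are hypothetical and only illustrate why, under a fixed budget, adding parameters competes directly with adding data or training time.

```python
# Hedged sketch: fixed-compute trade-off between model size and training data.
# Uses the common approximation C ~ 6 * N * D (N = parameters, D = tokens).
# The budget and model size below are hypothetical, not from the original item.

def tokens_for_budget(compute_budget_flops: float, n_params: float) -> float:
    """Training tokens affordable for a given parameter count under a fixed budget."""
    return compute_budget_flops / (6 * n_params)

budget = 1e21          # hypothetical fixed compute budget, in FLOPs
base_params = 350e6    # hypothetical BERT-large-scale bidirectional model

for scale in (1, 2):   # original size vs. doubled parameter count
    n = base_params * scale
    d = tokens_for_budget(budget, n)
    print(f"{n/1e6:.0f}M params -> ~{d/1e9:.1f}B training tokens")

# Doubling the parameter count under the same budget halves the tokens the
# model can be trained on, which is why proposals that invest in more data
# and longer pre-training (as RoBERTa did) often yield larger gains for
# bidirectional models than parameter scaling alone.
```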
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
RoBERTa
A research team aims to enhance the general language understanding capabilities of a pre-trained, bidirectional language model. Their plan is to double the model's parameter count but retrain it on the same original dataset due to resource limitations. Which statement best evaluates the likely outcome of this approach?
Resource Allocation for Model Improvement
Evaluating Model Scaling Strategies
Improving BERT Models by Increasing Parameters