AI Research Funding Decision
Imagine it is the early 2010s. A research institution has a fixed budget for its next major natural language processing (NLP) project. The team is debating two proposals:
Proposal A: Use the entire budget to significantly increase the scale of their existing, successful model. This involves doubling the number of parameters and doubling the amount of training text.
Proposal B: Use the budget to develop a completely new, more sophisticated model architecture, keeping the model size and training data volume the same as in their previous project.
Based on the conventional wisdom of that era regarding the relationship between model scale and performance improvement, which proposal would a skeptical funding committee most likely have favored, and what would be their primary justification?
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
AI Research Funding Decision
A natural language processing research team in the early 2010s is deciding how to allocate a significant budget increase. They can either use it to increase their training dataset size by a factor of ten or to fund a project to design a more complex model architecture. Based on the conventional wisdom of that era, which of the following arguments would most persuasively justify choosing the architectural project over the massive data increase?
The conventional wisdom in natural language processing, prior to the advent of very large models, held that simply increasing the size of a model and its training data would yield diminishing returns, and that substantial performance gains required innovation in model architecture rather than brute-force scaling.