In the two-stage process where a weaker model selects a data subset to fine-tune a stronger model, the stronger model's final performance is not capped by the weaker model's: because the weak model only filters examples and the original labels are kept, the strong model can surpass the weak model. The cap arises only when the weak model relabels the data, since the strong model is then trained to imitate the weak model's own outputs.
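A minimal sketch of the selection variant in Python. The helpers `weak_quality_score` and `finetune` are hypothetical stand-ins (not an API from the book); the point to notice is that the weak model only decides which examples survive, while the labels the strong model trains on are the originals.

```python
# Sketch of two-stage data selection (all helpers are hypothetical stubs).

def weak_quality_score(example: dict) -> float:
    """Stage-1 scorer: the weak model rates an EXISTING example.
    A real implementation might use the weak model's log-likelihood of
    the original response; stubbed here with a length heuristic."""
    return float(len(example["response"]))

def finetune(model_name: str, data: list) -> str:
    """Stand-in for any supervised fine-tuning routine."""
    print(f"fine-tuning {model_name} on {len(data)} examples")
    return model_name

raw_dataset = [
    {"prompt": "Explain RLHF.", "response": "RLHF aligns a model with human preferences ..."},
    {"prompt": "2 + 2 = ?", "response": ""},  # noisy / empty example
]

# Stage 1: the weak model SELECTS a subset; it never rewrites any label.
selected = [ex for ex in raw_dataset if weak_quality_score(ex) > 10]

# Stage 2: the strong model is fine-tuned on the selected subset, still
# supervised by the original labels, so its ceiling is not the weak
# model's own ability.
strong_model = finetune("strong-model", selected)
```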
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A research team is using a two-stage process to train a very large model. They start with a massive, noisy dataset. In the first stage, they use a small, less powerful model to process this data. In the second stage, they use the output of the first stage to fine-tune their large model. However, the large model's performance is not improving. Their specific implementation was to have the small model generate new labels for the entire initial dataset, and then train the large model on this complete, relabeled dataset. Based on the principle of using a weaker model to efficiently guide a stronger one, what is the most likely flaw in their methodology? (A sketch contrasting the two implementations appears after this list.)
You are tasked with improving a large, powerful model using a smaller, less capable model and a large, uncurated dataset. Arrange the following steps into the correct sequence to implement a two-stage data selection and fine-tuning process.
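For the scenario in the first related question, a short Python sketch of the contrast between the team's approach and the selection principle. `weak_generate` and `weak_is_high_quality` are assumed illustrative stubs, not the team's actual code.

```python
# The flawed stage 1 (relabeling) vs. the principle-following stage 1
# (selection). Both helpers are hypothetical stubs.

def weak_generate(prompt: str) -> str:
    """Weak model WRITES a new response; its mistakes become training targets."""
    return "weak model's possibly wrong answer to: " + prompt

def weak_is_high_quality(example: dict) -> bool:
    """Weak model only JUDGES an existing example; the original label survives."""
    return len(example["response"]) > 10

dataset = [{"prompt": "Explain RLHF.",
            "response": "RLHF aligns a model with human preferences ..."}]

# Flaw: relabeling the whole dataset makes the strong model imitate
# weak outputs, capping it near the weak model's level.
relabeled = [{"prompt": ex["prompt"], "response": weak_generate(ex["prompt"])}
             for ex in dataset]

# Fix: select a high-quality subset and keep the original labels.
selected = [ex for ex in dataset if weak_is_high_quality(ex)]
```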