A development team is using a general-purpose language model to consistently reformat user bug reports into a specific, structured JSON format. Initially, their process requires a very long and complex set of instructions to be included with every bug report sent to the model. To improve this, they create a dataset of 10,000 raw bug reports, each paired with the correctly formatted JSON output. They then use this dataset to conduct additional training on the base model. After this training is complete, what is the most likely and direct consequence for their workflow?
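The scenario above hinges on turning (raw report, formatted JSON) pairs into a supervised training set. A minimal sketch of that dataset-preparation step, using hypothetical example reports and a generic prompt/completion JSONL layout (field names are assumptions, not a specific vendor's schema):

```python
import json

# Hypothetical examples: raw bug reports paired with their target
# structured JSON, in the (prompt, completion) shape commonly used
# for supervised fine-tuning datasets.
pairs = [
    {
        "prompt": "App crashes when I tap Save twice quickly on Android 14.",
        "completion": json.dumps({
            "title": "Crash on rapid double-tap of Save",
            "platform": "Android 14",
            "severity": "high",
        }),
    },
    {
        "prompt": "Dark mode makes the settings text unreadable.",
        "completion": json.dumps({
            "title": "Unreadable settings text in dark mode",
            "platform": "unspecified",
            "severity": "low",
        }),
    },
]

# Serialize to JSONL, one training example per line -- a common
# input format for fine-tuning pipelines.
jsonl = "\n".join(json.dumps(p) for p in pairs)

# Round-trip check: every line parses back into a prompt/completion pair.
for line in jsonl.splitlines():
    record = json.loads(line)
    assert set(record) == {"prompt", "completion"}
```

After training on such pairs, the long formatting instructions no longer need to accompany each request; the desired input-to-output mapping is learned from the examples themselves.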
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Prefix Fine-Tuning
Comparing Model Adaptation Strategies