Learn Before
The objective for fine-tuning a pre-trained model is formally expressed as: Match each component of this objective function to its correct description.
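The objective function itself is not reproduced in this card. A plausible reconstruction, assuming the standard maximum-likelihood fine-tuning objective (all notation below is an assumption, not taken from the card):

```latex
% Hypothetical reconstruction of the fine-tuning objective:
% maximize the log-likelihood of the gold labels over the
% labeled fine-tuning dataset D.
\hat{\theta}_{\text{ft}} = \arg\max_{\theta}
  \sum_{(x,\, y_{\text{gold}}) \in D}
  \log \Pr\!\left(y_{\text{gold}} \mid x;\, \theta\right)
```

Under this reading, the components to match would be: the fine-tuned parameter estimate, the search over parameter settings, the sum over the labeled dataset, and the log-probability of the gold label given the input.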
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.4 Alignment - Foundations of Large Language Models
Ch.1 Pre-training - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Fine-Tuning as Maximum Likelihood Estimation
A machine learning engineer is adapting a large pre-trained language model to a new text classification task. They have a labeled dataset D containing pairs of text inputs (x) and their correct labels (y_gold). The engineer formulates the following objective for the adaptation process, where θ represents the model parameters, which are initialized randomly:
What is the primary conceptual error in this formulation for the specific goal of adapting the pre-trained model?
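A hedged sketch of the distinction the question turns on, assuming the standard maximum-likelihood fine-tuning objective (the symbol θ_pre below is an assumption introduced here, not part of the card):

```latex
% Fine-tuning sketch: the same MLE objective over D, but the
% optimization starts from the pre-trained parameters \theta_{\text{pre}}
% rather than from a random initialization. Random initialization
% would discard everything learned during pre-training and reduce
% the procedure to ordinary supervised training from scratch.
\theta^{(0)} = \theta_{\text{pre}}, \qquad
\hat{\theta} = \arg\max_{\theta}
  \sum_{(x,\, y_{\text{gold}}) \in D}
  \log \Pr\!\left(y_{\text{gold}} \mid x;\, \theta\right)
```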
Notation for Parameters in the Fine-Tuning Process
Comparing Optimization Objectives in Model Training
The objective for fine-tuning a pre-trained model is formally expressed as: Match each component of this objective function to its correct description.
Application Formula for Fine-Tuned BERT Models