Learn Before
In a language model pre-training setup, a 'generator' network corrupts an input sentence by replacing some tokens. A separate 'discriminator' network is then tasked with identifying which tokens in the corrupted sentence are original and which are replacements. If both networks are trained simultaneously, which statement best distinguishes their respective optimization goals?
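A minimal sketch of the two objectives, assuming an ELECTRA-style replaced token detection setup (the symbols, the masked-position set $\mathcal{M}$, and the weighting hyperparameter $\lambda$ are notational assumptions, not part of the question): the generator $G$ is trained with maximum-likelihood masked language modeling, while the discriminator $D$ is trained with a per-token binary classification loss over the corrupted sequence $\mathbf{x}^{\text{corrupt}}$, predicting whether each token is original or replaced.

$$
\mathcal{L}_{G} = \mathbb{E}\Big[\sum_{i \in \mathcal{M}} -\log p_{G}\big(x_i \mid \mathbf{x}^{\text{masked}}\big)\Big],
\qquad
\mathcal{L}_{D} = \mathbb{E}\Big[\sum_{i=1}^{n} -\log p_{D}\big(\mathbb{1}[x^{\text{corrupt}}_i = x_i] \;\big|\; \mathbf{x}^{\text{corrupt}}, i\big)\Big]
$$

Under joint training the combined objective is typically $\min_{\theta_G, \theta_D} \; \mathcal{L}_{G} + \lambda \, \mathcal{L}_{D}$. Note that in this formulation, unlike a GAN, the generator is optimized by maximum likelihood rather than to fool the discriminator.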
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
GAN-based Training for Replaced Token Detection
Differentiating Training Objectives in a Two-Network Model
Analysis of Joint Training Dynamics in a Two-Network Model