Learn Before
Optimizing a Model's Training Strategy
An engineer is preparing bilingual sentence pairs for a model. The process involves taking a packed sequence (e.g., [CLS] sentence_one [SEP] sentence_two [SEP]) and randomly replacing some words with a [MASK] symbol. The model is then trained to predict the original words. The engineer notices the model becomes very good at predicting common grammatical words (like 'is', 'a', 'the') but performs poorly on important content-carrying words (like specific nouns and verbs). Describe a modification to the word replacement strategy that would compel the model to improve its performance on these content-carrying words. Justify your reasoning.
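One answer is to bias the replacement probability toward content words, e.g. by masking tokens outside a function-word list far more often than tokens inside it, so the prediction loss concentrates on nouns and verbs. A minimal sketch of such a strategy is below; the `FUNCTION_WORDS` set and the probabilities `p_content` / `p_function` are illustrative assumptions, not values from any particular system (a real pipeline would derive them from corpus frequency statistics or POS tags).

```python
import random

# Assumed toy function-word list for illustration; a real system would use
# a full stoplist or corpus frequency counts.
FUNCTION_WORDS = {"is", "a", "the", "on", "in", "of", "and", "to",
                  "el", "la", "se", "en"}

def biased_mask(tokens, p_content=0.30, p_function=0.05, rng=None):
    """Replace tokens with [MASK], masking content-carrying words much more
    often than common grammatical words, so the model is forced to learn
    the harder predictions."""
    rng = rng or random.Random(0)
    masked, targets = [], []
    for tok in tokens:
        if tok in ("[CLS]", "[SEP]"):
            masked.append(tok)          # never mask special symbols
            continue
        p = p_function if tok.lower() in FUNCTION_WORDS else p_content
        if rng.random() < p:
            masked.append("[MASK]")
            targets.append(tok)         # the model must predict these
        else:
            masked.append(tok)
    return masked, targets
```

Because content words are masked at a much higher rate, they dominate the set of prediction targets, which is exactly the pressure the question asks for.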
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Transformer Encoding of a Masked Bilingual Sentence Pair
A model is being prepared to understand relationships between aligned sentences in different languages. An input sequence is created by joining a Spanish sentence and its English translation. To train the model to predict missing words, some original words are replaced with a special [MASK] symbol. Given the original packed sequence below, which option correctly demonstrates this replacement process?
Original Sequence:
[CLS] El gato se sentó en la alfombra . [SEP] The cat sat on the mat . [SEP]
Optimizing a Model's Training Strategy
Evaluating a Masking Strategy for Specialized Translation