Learn Before
A researcher designs a language model where the final input representation for each word is created by summing a vector for the word's identity and a vector for the sentence it belongs to. However, they intentionally omit the vector that encodes the word's specific position in the sequence. What is the most likely deficiency this model will exhibit?
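The deficiency can be made concrete with a minimal NumPy sketch (all table names and sizes here are illustrative assumptions, not from the source): when the final representation is only token vector + sentence/segment vector, two sequences containing the same words in different orders produce the same multiset of input vectors, so the model is blind to word order.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 10, 4

# Hypothetical embedding tables (illustrative sizes)
token_emb = rng.normal(size=(vocab_size, d_model))    # word identity
segment_emb = rng.normal(size=(2, d_model))           # which sentence it belongs to

def embed(token_ids, segment_id):
    # Final input = token vector + segment vector; the position vector is omitted
    return token_emb[token_ids] + segment_emb[segment_id]

a = embed([3, 1, 7], segment_id=0)   # e.g. "dog bites man"
b = embed([7, 1, 3], segment_id=0)   # same tokens, reordered: "man bites dog"

# The rows of b are just a permutation of the rows of a, so the model
# receives the same bag of vectors for both orderings.
assert np.allclose(np.sort(a, axis=0), np.sort(b, axis=0))
```

Because the two reorderings are indistinguishable at the input, any order-sensitive behavior would have to come from somewhere else in the architecture.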
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Calculating a Final Input Embedding
A common method for creating the final input representation for a token in a sequence involves summing three distinct vectors. Match each vector component to its specific function in this process.
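The three-vector sum described above can be sketched as follows (a minimal illustration in the style of BERT-like input embeddings; table names and sizes are assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, max_len, d_model = 10, 8, 4

# One hypothetical lookup table per component
token_emb = rng.normal(size=(vocab_size, d_model))    # the word's identity
position_emb = rng.normal(size=(max_len, d_model))    # the word's position in the sequence
segment_emb = rng.normal(size=(2, d_model))           # the sentence/segment it belongs to

def final_input(token_ids, segment_id):
    # Final representation = token vector + position vector + segment vector
    positions = np.arange(len(token_ids))
    return token_emb[token_ids] + position_emb[positions] + segment_emb[segment_id]

x = final_input([3, 1, 7], segment_id=0)
print(x.shape)  # (3, 4): one d_model-sized vector per token
```

Each of the three tables contributes one vector per token, and the element-wise sum gives the single input vector the rest of the model consumes.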