Model Architecture Design Choice
A data scientist is working on two separate natural language processing projects. Both use a model architecture in which all input tokens are processed in parallel rather than sequentially.
- Project A: A sentiment analysis model for customer reviews. The model's first step is to identify the presence of specific positive and negative keywords from a predefined list, treating the review as an unordered collection of these keywords.
- Project B: A machine translation system that translates English sentences into French, where word order is critical for grammatical correctness and meaning.
For which project is the inclusion of position-specific information vectors (which are added to the initial token vectors) an essential design choice? Justify your answer by explaining why it is necessary for one project but not for the other.
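As background for answering, the "position-specific information vectors" in the question are typically realized as positional encodings added elementwise to the token embeddings. Below is a minimal NumPy sketch of the fixed sinusoidal variant (the function name, dimensions, and toy embeddings are illustrative, not from the source).

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Build a (seq_len, d_model) matrix of fixed sinusoidal position vectors."""
    positions = np.arange(seq_len)[:, np.newaxis]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]       # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                         # even dims: sine
    pe[:, 1::2] = np.cos(angles)                         # odd dims: cosine
    return pe

# Toy token embeddings for a 5-token sentence, d_model = 8 (values arbitrary).
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(5, 8))

# The position vectors are simply added to the initial token vectors.
model_input = token_embeddings + sinusoidal_positional_encoding(5, 8)
```

Because each row of the encoding matrix is unique to its position, two occurrences of the same word at different positions now enter the model with distinct vectors.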
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Self-Attention layer understanding - Step 5 - Adding the time
Input Embedding with Positional Encoding
Learnable Absolute Positional Embeddings
Initial Input Representation for Transformer Layers
Comparison of Arbitrary Order Prediction and Masked Language Modeling
An engineer builds a language model where all input words in a sentence are processed simultaneously and independently before their information is combined. When testing the model with the sentences 'The cat chased the dog' and 'The dog chased the cat', the engineer observes that the model generates identical internal representations for both, failing to capture their different meanings. Which of the following modifications would most directly address this fundamental flaw?
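The failure described in this question can be reproduced with a toy sketch: if each token vector is processed independently and then combined by an order-insensitive operation (here, an elementwise nonlinearity followed by a sum), the two sentences collapse to the same representation; adding a distinct vector per position breaks the tie. All names and embeddings below are hypothetical stand-ins, not the engineer's actual model.

```python
import numpy as np

# Toy word embeddings (random but fixed); a real model would learn these.
rng = np.random.default_rng(42)
vocab = {w: rng.normal(size=4) for w in ["the", "cat", "chased", "dog"]}

def embed(sentence):
    """Stack one embedding vector per token: shape (n_tokens, d)."""
    return np.stack([vocab[w] for w in sentence.lower().split()])

def pool(token_vectors):
    # Per-token processing, then an order-insensitive sum: a stand-in
    # for "processed simultaneously and independently, then combined".
    return np.tanh(token_vectors).sum(axis=0)

s1 = embed("The cat chased the dog")
s2 = embed("The dog chased the cat")

# Same multiset of words, so the pooled representations are identical.
assert np.allclose(pool(s1), pool(s2))

# One distinct vector per position (a stand-in for learned or sinusoidal
# positional encodings), added before pooling, makes them differ.
pos = rng.normal(size=s1.shape)
assert not np.allclose(pool(s1 + pos), pool(s2 + pos))
```

The sketch shows why injecting position information into the input vectors is the most direct fix: it is the only change that gives the combination step any signal about word order.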
Analyzing Order-Insensitivity in Language Models