Optimizing a Language Model for Real-Time Translation
Based on the scenario, what specific modification to the model's attention mechanism would you propose to address the performance issue? Explain why this change would be effective by describing the relationship between the set of attended-to tokens and computational cost.
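One modification often proposed for this kind of latency problem is sliding-window (local) attention, where each new token attends only to a fixed number of recent tokens, so per-token cost depends on the window size rather than the full sequence length. A minimal sketch (the function name, shapes, and window size here are illustrative assumptions, not code from the scenario):

```python
import numpy as np

def sliding_window_attention(q, K, V, window):
    """Attend only to the `window` most recent tokens.

    q: (d,) query vector for the current token
    K, V: (t, d) keys/values for all t previous tokens

    Restricting the attended-to set to `window` tokens makes the
    per-token cost O(window * d) instead of O(t * d), so decoding
    cost stops growing with sequence length.
    """
    K_local = K[-window:]                      # keep only recent keys
    V_local = V[-window:]                      # and their values
    scores = K_local @ q / np.sqrt(q.shape[0]) # scaled dot-product scores
    weights = np.exp(scores - scores.max())    # stable softmax
    weights /= weights.sum()
    return weights @ V_local                   # weighted sum of values

rng = np.random.default_rng(0)
d, t = 8, 100
out = sliding_window_attention(rng.normal(size=d),
                               rng.normal(size=(t, d)),
                               rng.normal(size=(t, d)),
                               window=16)
```

Because the set of attended-to tokens is capped, the cost of generating each new token stays constant no matter how long the translation grows.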
0
1
Tags
- Ch.2 Generative Models - Foundations of Large Language Models
- Foundations of Large Language Models
- Foundations of Large Language Models Course
- Computing Sciences
- Application in Bloom's Taxonomy
- Cognitive Psychology
- Psychology
- Social Science
- Empirical Science
- Science
Related
An engineer is designing a text-generation model and is considering two different configurations for how each new token attends to previous tokens in the sequence.
- Configuration A: Each new token computes attention scores with only the 16 most recent tokens in the sequence.
- Configuration B: Each new token computes attention scores with all preceding tokens up to a maximum of 512.
Which statement best analyzes the primary trade-off between these two configurations?
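The trade-off can be made concrete by counting how many attention scores each configuration computes over a full sequence. A rough sketch (the helper below is a hypothetical illustration, not part of either configuration's actual implementation):

```python
def scores_computed(position, mode):
    """Number of attention scores the token at `position` (0-indexed)
    computes against earlier tokens under each configuration."""
    if mode == "A":                 # only the 16 most recent tokens
        return min(position, 16)
    else:                           # all preceding tokens, capped at 512
        return min(position, 512)

seq_len = 512
total_a = sum(scores_computed(p, "A") for p in range(seq_len))  # 8056
total_b = sum(scores_computed(p, "B") for p in range(seq_len))  # 130816
```

Configuration A's total grows linearly with sequence length (each token does at most 16 comparisons), while Configuration B's grows roughly quadratically up to the 512-token cap; B buys longer-range context at that extra cost.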
Analyzing Sparse Attention Trade-offs
Optimizing a Language Model for Real-Time Translation
In a sparse attention model, expanding the index set G to include more preceding tokens for each query will result in a higher degree of model sparsity and a reduction in computational cost.
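The quantitative relationship this statement turns on can be checked with a rough per-query FLOP count. A sketch under the assumption that one query attends to |G| keys of dimension d (the function and constant factors are illustrative, not from the card):

```python
def attention_flops(g_size, d_model):
    """Approximate FLOPs for one query attending to |G| = g_size tokens:
    g_size dot products of dimension d_model for the scores, plus a
    g_size-weighted sum over the value vectors (constants omitted)."""
    return 2 * g_size * d_model

# Cost scales linearly with the size of the index set G:
small_g = attention_flops(16, 64)
large_g = attention_flops(512, 64)
```

Since per-query cost grows linearly with |G|, the reader can use this to judge whether expanding G really lowers computational cost, as the statement claims.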