Learn Before
Rationale for Parameter Sharing in Positional Bias
Imagine a large language model that uses relative positional biases. One design learns a separate parameter for every possible relative distance between tokens. A second design shares a single parameter across a whole range of large but similar distances. Explain why the second design is generally more effective on sequences significantly longer than any seen during training.
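A minimal sketch of the contrast, assuming hypothetical numbers (training sequences of at most 512 tokens, exact parameters for distances below 64, and three coarse ranges above that); the names and values are illustrative rather than taken from any particular model:

# Hypothetical setup: training sequences never exceed 512 tokens,
# so the largest relative distance seen during training is 511.

# Design 1: a separate learned value for every exact distance.
unique_bias = {d: 0.0 for d in range(512)}  # placeholder "learned" values

# Design 2: exact values for small distances, one shared value per coarse range.
shared_bias = {**{d: 0.0 for d in range(64)},
               "64-127": 0.0, "128-255": 0.0, "256+": 0.0}

def design2_key(distance):
    # Every distance, however large, maps onto a key that training covered.
    if distance < 64:
        return distance
    if distance < 128:
        return "64-127"
    if distance < 256:
        return "128-255"
    return "256+"

# A distance of 1500 only occurs in documents longer than anything trained on.
print(1500 in unique_bias)   # False: design 1 has no parameter to consult
print(design2_key(1500))     # "256+": design 2 reuses a parameter trained on distances 256-511

The point of the sketch: under the first design an out-of-range distance has no learned parameter at all, while under the second it falls into a bucket whose parameter received ample training signal.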
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A language model's attention mechanism uses a relative positional bias. During training on text segments never exceeding 512 tokens, it learns a unique bias parameter for each relative distance from 1 to 63; for all distances from 64 to 127 it uses a single shared parameter, for all distances from 128 to 255 another shared parameter, and so on. The model must now process a document of 2048 tokens. Which statement best analyzes the primary benefit of using shared parameters for larger distances in this scenario?
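A minimal sketch of such a bucketing rule, assuming (as an illustration, not something stated in the scenario) that the "and so on" doubling ranges end in a final catch-all bucket, so that every distance produced by the 2048-token document maps to a parameter index already exercised during 512-token training:

import math

def distance_to_bucket(distance, num_exact=64, max_distance=512):
    # Distances below num_exact each keep their own parameter index;
    # larger distances share one index per doubling range (64-127, 128-255, ...);
    # distances at or beyond max_distance reuse the last range's index.
    if distance < num_exact:
        return distance
    clamped = min(distance, max_distance - 1)
    # floor(log2(clamped / num_exact)) selects the doubling range
    return num_exact + int(math.log2(clamped / num_exact))

# Training on 512-token segments exercises buckets 0 through 66 (distances up to 511).
# A distance of 1500, which only appears in the 2048-token document, lands in the
# same bucket as training-time distances in the 256-511 range.
print(distance_to_bucket(300))    # 66
print(distance_to_bucket(1500))   # 66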
Model Selection for Long-Sequence Tasks
Rationale for Parameter Sharing in Positional Bias