Short Answer

Rationale for Parameter Sharing in Positional Bias

Imagine a large language model that uses relative positional biases. One design learns a separate parameter for every possible distance between tokens. A second design shares a single parameter across many large but similar distances. Explain why the second design is generally more effective at processing sequences significantly longer than any seen during training.
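To make the contrast concrete, here is a minimal sketch of the second design, modeled on the T5-style bucketed relative position bias. PyTorch is assumed, and the function name, bucket counts, and defaults are illustrative choices, not any library's exact API. Short distances each keep an exact bucket, longer distances share buckets on a logarithmic scale, and every distance beyond the cap collapses into the final bucket, so distances never seen in training still map to trained parameters:

```python
import math
import torch

def relative_position_bucket(relative_position: torch.Tensor,
                             num_buckets: int = 32,
                             max_distance: int = 128) -> torch.Tensor:
    """Map signed token distances to a fixed set of shared bucket indices."""
    # Half the buckets encode direction (key before vs. after the query).
    num_buckets //= 2
    direction = (relative_position > 0).long() * num_buckets
    distance = relative_position.abs()

    # Short distances each get an exact, per-distance bucket.
    max_exact = num_buckets // 2
    is_small = distance < max_exact

    # Longer distances share buckets on a logarithmic scale, capped at the
    # last bucket so arbitrarily large distances remain valid indices.
    # (clamp(min=1) avoids log(0) for the zero-distance case, which the
    # small branch handles anyway.)
    large = max_exact + (
        torch.log(distance.float().clamp(min=1) / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    ).long()
    large = torch.minimum(large, torch.full_like(large, num_buckets - 1))

    return direction + torch.where(is_small, distance, large)

# Distances far beyond anything seen in training still index trained biases:
print(relative_position_bucket(torch.tensor([1, 4, 100, 10_000])))
# tensor([17, 20, 31, 31]): tokens 100 and 10,000 apart share one learned bias
```

Under the first design, a distance of 10,000 would index a parameter that was never updated during training, or would fall outside the parameter table entirely; under the second, it reuses a bias that was trained on every long-range pair the model did see.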


Updated 2025-10-06


Tags: Ch.2 Generative Models - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science