Multiple Choice

An AI engineer is adapting a language model that was originally trained to handle sequences of 2000 tokens. The model uses a positional encoding method in which each token's embedding is rotated by an angle proportional to its position. The goal is to enable the model to process sequences of up to 8000 tokens without full retraining. The underlying mathematical principle of this encoding method is that rotating a token at a scaled position is equivalent to applying the original rotation with a proportionally scaled angle. Given this principle, what is the most direct and efficient strategy for the engineer to implement?
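The principle in the question points to scaling the position indices by 2000/8000 = 0.25, so that every rotation angle for the longer sequence stays within the range seen during training (commonly called position interpolation). For concreteness, here is a minimal NumPy sketch of a rotary-style positional encoding with such a scale factor; it is not part of the original question, and the function names (`rope_angles`, `apply_rope`), the feature dimension, and the `base` constant are illustrative assumptions.

```python
# Minimal sketch of rotary positional encoding (RoPE) with position
# interpolation. Names and toy dimensions are illustrative assumptions.
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Rotation angle theta_i * m for each position m and feature pair i."""
    # theta_i = base^(-2i/dim), one frequency per pair of dimensions
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # shape (dim/2,)
    return np.outer(positions, inv_freq)               # shape (seq, dim/2)

def apply_rope(x, positions, scale=1.0):
    """Rotate each 2D feature pair of x by its position-dependent angle.

    scale < 1 implements position interpolation: rotating by angle
    theta_i * (scale * m) is identical to encoding the scaled position
    scale * m with the original, unmodified rotation.
    """
    seq, dim = x.shape
    angles = rope_angles(scale * positions, dim)       # (seq, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                    # even/odd pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                 # standard 2D rotation
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Interpolate 8000 positions into the trained 2000-position angle range.
trained_len, target_len = 2000, 8000
scale = trained_len / target_len                       # 0.25
x = np.random.randn(target_len, 64)
x_rot = apply_rope(x, np.arange(target_len), scale=scale)
```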

Updated 2025-09-28

Tags: Ch.3 Prompting - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Application in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science
