Multiple Choice

A language model using a rotational mechanism for positional information processes two different sentences:

Sentence A: "...the powerful model at position 4 now predicts at position 6..."
Sentence B: "...we see the powerful model at position 15 now predicts at position 17..."

In both sentences, the relative distance between 'model' and 'predicts' is 2. Based on the principles of this rotational encoding method, what is the most accurate conclusion about the position-encoded vectors for 'model' and 'predicts' in the two sentences?
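The key property the question probes can be checked directly. Below is a minimal NumPy sketch, assuming the "rotational mechanism" refers to rotary position embedding (RoPE): each even/odd pair of embedding dimensions is rotated by an angle proportional to the token's absolute position, so the dot product between a rotated query and key depends only on their relative offset. The 8-dimensional random vectors standing in for 'model' and 'predicts' are illustrative, not from the source.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate each even/odd pair of x by pos * theta_i (a RoPE sketch)."""
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)  # per-pair frequencies
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin  # 2-D rotation applied pairwise
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal(8)  # hypothetical embedding for 'model'
k = rng.standard_normal(8)  # hypothetical embedding for 'predicts'

score_a = rope(q, 4) @ rope(k, 6)    # Sentence A: positions 4 and 6
score_b = rope(q, 15) @ rope(k, 17)  # Sentence B: positions 15 and 17

# The rotated vectors themselves differ (different absolute positions),
# but the query-key score is identical because the offset is 2 in both.
print(np.isclose(score_a, score_b))
```

So the position-encoded vectors are not identical across the two sentences, yet the quantity attention actually uses, the query-key dot product, is the same for the same relative distance.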

Updated 2025-09-29

Tags: Ch.2 Generative Models - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science