Learn Before
Multiple Choice

A transformer model's architecture specifies that its token embeddings have a dimensionality of 768. This model uses a rotational method to encode positional information, where the transformations are governed by a specific parameter vector. Based on this information, how many components does this parameter vector contain?

0

1
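
If the rotational method here is RoPE (Rotary Position Embedding, a common assumption for this kind of setup), the governing parameter vector holds one rotation frequency per *pair* of embedding dimensions, giving d/2 components. A minimal sketch with the hypothetical base 10000 used in the original RoPE formulation:

```python
import numpy as np

d = 768  # embedding dimensionality from the question

# RoPE rotates each consecutive pair of embedding dimensions by a
# position-dependent angle, so it needs one frequency per pair: d/2 entries.
i = np.arange(d // 2)
theta = 10000.0 ** (-2.0 * i / d)  # frequency (parameter) vector

print(theta.shape[0])  # → 384
```

So for a 768-dimensional embedding the vector has 768 / 2 = 384 components.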

Updated 2025-10-01


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Application in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science