Learn Before
Positional Encoding Vector
Positional Encoding, denoted as PE(pos), is a technique used to inject information about the relative or absolute position of tokens in a sequence. The encoding for a specific position is represented as a vector in a d-dimensional real number space. This is formally expressed as PE(pos) ∈ ℝᵈ, meaning each position is mapped to a unique high-dimensional vector.
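The definition above only requires an injective map from positions to d-dimensional vectors. As a concrete sketch, here is one common choice — the sinusoidal scheme from the original Transformer (an assumption, since the text does not fix a particular scheme):

```python
import math

def positional_encoding(pos: int, d: int) -> list[float]:
    """Map position `pos` to a vector in R^d (sinusoidal scheme).

    Even indices use sine, odd indices use cosine, at geometrically
    spaced frequencies, so every position gets a distinct d-dim vector.
    """
    pe = [0.0] * d
    for i in range(0, d, 2):
        angle = pos / (10000 ** (i / d))
        pe[i] = math.sin(angle)
        if i + 1 < d:
            pe[i + 1] = math.cos(angle)
    return pe

vec = positional_encoding(pos=5, d=8)
print(len(vec))  # 8: the vector always lives in R^d, regardless of pos
```

Note that the output dimension depends only on d, not on the position index or the sequence length.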

Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Positional Encoding Vector
A data scientist is preparing a dataset to train a model that predicts customer churn. For each of the 50,000 customers in the dataset, they record three features: 'account age in months', 'average monthly spending', and 'number of support tickets filed'. If a single customer is denoted as a data point x, which of the following correctly describes its mathematical representation?
Analyzing a Text Representation
Describing a Word Vector
Learn After
A sequence processing model is configured to represent all its internal data using 512-dimensional vectors. To understand the order of items, it generates a unique vector for each position in an input sequence. If the model is given a sequence with 40 items, what are the dimensions of the vector that represents the position of the 25th item?
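To reason about this scenario in code, the sketch below builds a position vector under an assumed sinusoidal scheme (the specific scheme is a hypothetical choice; any positional encoding would give vectors of the model's dimension):

```python
import math

def position_vector(pos: int, d: int = 512) -> list[float]:
    # Hypothetical sinusoidal encoding; the key point is that the
    # result is a d-dimensional vector for every position index.
    return [
        math.sin(pos / 10000 ** (i / d)) if i % 2 == 0
        else math.cos(pos / 10000 ** ((i - 1) / d))
        for i in range(d)
    ]

# Position vector for the 25th item (0-indexed position 24)
# in a 40-item sequence:
v = position_vector(pos=24, d=512)
print(len(v))  # 512 — the sequence length (40) does not affect the shape
```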
Diagnosing a Model's Translation Error
Analyzing a Model's Order-Insensitivity