Activity (Process)

Applying a Pre-trained Encoder to Downstream Tasks

In the application phase, a pre-trained encoder is adapted for a specific downstream task. The process begins by converting an input sequence of tokens, {x_0, ..., x_m}, into their corresponding embeddings, {e_0, ..., e_m}. This embedding sequence is then processed by the pre-trained encoder to produce a sequence of rich vector representations. These representations serve as input features for a separate, task-specific prediction network, which in turn generates the final output required for the application.
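The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the book's implementation: the embedding table, the single tanh "encoder" layer, the mean pooling, and the classification head are all hypothetical stand-ins with random weights, chosen only to show how the token sequence flows from embeddings through the encoder to a task-specific prediction network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
VOCAB_SIZE, D_MODEL, NUM_CLASSES = 100, 16, 3

# Stand-ins for pre-trained parameters (random here; in practice loaded
# from the pre-trained checkpoint).
embedding_table = rng.normal(size=(VOCAB_SIZE, D_MODEL))
W_enc = rng.normal(size=(D_MODEL, D_MODEL))       # toy one-layer "encoder"
W_head = rng.normal(size=(D_MODEL, NUM_CLASSES))  # task-specific head

def encode(token_ids):
    """Tokens {x_0..x_m} -> embeddings {e_0..e_m} -> representations."""
    e = embedding_table[token_ids]   # (m+1, d_model) embedding lookup
    h = np.tanh(e @ W_enc)           # (m+1, d_model) encoder output
    return h

def predict(token_ids):
    """Prediction network: pool the representations, then classify."""
    h = encode(token_ids)
    pooled = h.mean(axis=0)          # one vector for the whole sequence
    logits = pooled @ W_head
    return int(np.argmax(logits))    # final output for the downstream task

label = predict(np.array([4, 8, 15, 16]))
```

In a real system the encoder would be a multi-layer Transformer and only the head (and optionally the encoder, via fine-tuning) would be trained on the downstream data; the data flow, however, is exactly the one shown.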

Updated 2026-05-02

Tags

Ch.1 Pre-training - Foundations of Large Language Models
