
Input Sequence Formatting for Span Prediction

To prepare the input for a span prediction model, the query and the context are packed into a single sequence. This is typically done by concatenating the query tokens x1 x2 ... xm, a special separator token [SEP], the context tokens y1 y2 ... yn, and a final [SEP] token. The packed sequence x1 x2 ... xm [SEP] y1 y2 ... yn [SEP] is then fed into the model.
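The packing step above can be sketched as a small helper. This is a minimal illustration, not a real tokenizer: the token lists, the `pack_sequence` name, and the segment ids (a common way models distinguish the query side from the context side) are all assumptions for the example.

```python
def pack_sequence(query_tokens, context_tokens, sep="[SEP]"):
    """Concatenate query, [SEP], context, and a final [SEP] into one sequence.

    Also returns segment ids (0 for the query side, 1 for the context side),
    which many span-prediction models use to tell the two parts apart.
    """
    packed = query_tokens + [sep] + context_tokens + [sep]
    segment_ids = [0] * (len(query_tokens) + 1) + [1] * (len(context_tokens) + 1)
    return packed, segment_ids

# Illustrative word-level tokens; a real system would use a subword tokenizer.
query = ["where", "is", "the", "eiffel", "tower"]
context = ["the", "eiffel", "tower", "is", "in", "paris"]
tokens, segments = pack_sequence(query, context)
```

Here `tokens` is the packed sequence ending in the final `[SEP]`, and `segments` marks which positions belong to the query versus the context.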

Updated 2026-04-18

Ch.2 Generative Models - Foundations of Large Language Models