Definition

Concatenated Sequence Representation for Multi-Turn Dialogue

To enable efficient, single-pass training of multi-turn dialogue models, the entire conversation is represented as one unified sequence. For a dialogue with K turns, all user inputs and model responses are concatenated in chronological order, giving seq = [x¹, y¹, ..., xᴷ, yᴷ]. The training objective is then the log-probability of this complete sequence.
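The concatenation can be sketched in plain Python. The `build_training_sequence` helper and the toy token ids below are illustrative assumptions, not from the source; the loss mask over user tokens is likewise a common practical detail the source does not specify (it only states that the objective is the log-probability of the complete sequence).

```python
def build_training_sequence(turns):
    """Concatenate K (user, response) turn pairs into one sequence.

    turns: list of (x_k, y_k) pairs, each a list of token ids.
    Returns (seq, loss_mask), where loss_mask[i] marks whether
    position i is a response token. Some training setups compute
    the loss only on response tokens; others use the full sequence.
    """
    seq, loss_mask = [], []
    for x_k, y_k in turns:
        seq.extend(x_k)
        loss_mask.extend([False] * len(x_k))  # user input tokens
        seq.extend(y_k)
        loss_mask.extend([True] * len(y_k))   # model response tokens
    return seq, loss_mask

# Two-turn dialogue (K = 2) with toy token ids
turns = [([1, 2], [3, 4, 5]), ([6], [7, 8])]
seq, mask = build_training_sequence(turns)
# seq  == [1, 2, 3, 4, 5, 6, 7, 8]
# mask == [False, False, True, True, True, False, True, True]
```

Because the whole dialogue sits in one sequence, a single forward pass suffices: every turn's prediction is conditioned on all preceding turns via the usual causal attention mask.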


Updated 2025-10-07


Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences