Concept

Sufficiency of Learned Features for Future Token Prediction

An area of investigation in long-context language modeling is whether the features a model has computed up to a given position are sufficient for predicting subsequent tokens. This research probes the efficiency and foresight of the model's internal representations: if intermediate features already encode information about tokens several steps ahead, the representation is doing more than next-token bookkeeping.
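One common way to test this kind of sufficiency is to freeze a model's hidden states and train a lightweight probe to predict a token several positions ahead. The sketch below is a toy, hypothetical illustration with synthetic data, not an implementation from any source: the "hidden states" are constructed so that each one encodes the token `k` steps in the future, and a least-squares linear probe checks how recoverable that future token is. All names (`H`, `probe`, the data-generation scheme) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d, vocab, k = 500, 16, 8, 2  # sequence length, feature dim, vocab size, lookahead

# Synthetic token stream and a random per-token feature direction.
tokens = rng.integers(0, vocab, size=T)
W_true = rng.normal(size=(vocab, d))

# By construction, the "hidden state" at position t carries a noisy copy of the
# feature direction for the token at t+k, i.e. the features are (nearly)
# sufficient for predicting k steps ahead.
H = np.stack([W_true[tokens[t + k]] + 0.1 * rng.normal(size=d)
              for t in range(T - k)])
targets = tokens[k:]

# Fit a linear probe by least squares onto one-hot future-token targets.
Y = np.eye(vocab)[targets]
probe, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Probe accuracy measures how much future-token information the features hold.
preds = (H @ probe).argmax(axis=1)
accuracy = (preds == targets).mean()
print(f"probe accuracy at offset k={k}: {accuracy:.2f}")
```

With a real model, `H` would instead be frozen hidden states extracted at each position; a high probe accuracy at offset `k` is evidence that the learned features already suffice for predicting that future token, while chance-level accuracy suggests they do not.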

Updated 2026-04-29

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.2 Generative Models - Foundations of Large Language Models