Multiple Choice

An autoregressive model is given an input prompt x, the sequence 'The best movie I ever saw was'. The model has already generated the partial output sequence y_{<i}, which is 'about a'. Its next task is to predict the probability of the next token y_i, following the standard conditional probability Pr(y_i | x, y_{<i}). What is the actual, full sequence of tokens the model uses as its context to make this prediction?
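The context in Pr(y_i | x, y_{<i}) is the prompt followed by every token generated so far. A minimal sketch, assuming a hypothetical one-token-per-word tokenization, of how that context is assembled:

```python
# Sketch of the context an autoregressive model conditions on when
# predicting y_i, i.e. Pr(y_i | x, y_{<i}).
# Assumption: one token per word (a stand-in for a real tokenizer).
x = "The best movie I ever saw was".split()   # prompt tokens x
y_prev = "about a".split()                    # already-generated tokens y_{<i}
context = x + y_prev                          # full conditioning context for y_i
print(" ".join(context))
# -> The best movie I ever saw was about a
```

The model never sees the prompt and the partial output separately; at each step they are concatenated into one sequence that conditions the next-token distribution.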

Updated 2025-10-01

Tags

Ch.5 Inference - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Application in Bloom's Taxonomy
