Multiple Choice

A k-Nearest Neighbors Language Model (k-NN LM) is generating text and needs to predict the next token. It queries its datastore and retrieves the 5 nearest reference tokens, along with their corresponding distances: {"river": 0.1}, {"stream": 0.2}, {"river": 0.3}, {"ocean": 0.8}, {"river": 0.9}. How are these retrieved tokens and their distances used to construct a new probability distribution over the model's vocabulary?
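A common construction, following the k-NN LM of Khandelwal et al. (2020), applies a softmax over the negative distances and sums the resulting mass for tokens that appear multiple times among the neighbors. A minimal sketch of that computation (the function name and temperature parameter are illustrative, not from any particular library):

```python
import math
from collections import defaultdict

def knn_distribution(neighbors, temperature=1.0):
    """Convert retrieved (token, distance) pairs into a probability
    distribution: softmax over negative distances, with the mass of
    repeated tokens accumulated onto a single vocabulary entry."""
    weights = [math.exp(-dist / temperature) for _, dist in neighbors]
    total = sum(weights)
    probs = defaultdict(float)
    for (token, _), w in zip(neighbors, weights):
        probs[token] += w / total  # duplicate tokens add their shares
    return dict(probs)

# The five retrieved neighbors from the question:
neighbors = [("river", 0.1), ("stream", 0.2), ("river", 0.3),
             ("ocean", 0.8), ("river", 0.9)]
p = knn_distribution(neighbors)
# "river" receives the combined weight of its three occurrences,
# so it ends up with the largest share of the probability mass.
```

In the full k-NN LM, this retrieval distribution is then interpolated with the base model's softmax output; here only the construction of the retrieval distribution itself is shown.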

Updated 2025-10-03

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science