Learn Before
Concept

Inductive Bias in Nearest Neighbors

In the 1-nearest neighbor algorithm, the required distance function d, or equivalently the vector-valued basis function φ(x) used to featurize the data, encodes the model's inductive bias. While any distance metric allows the model to achieve zero training error and eventually converge to an optimal predictor, different choices of d represent different underlying assumptions about the data patterns. Consequently, with a finite amount of available data, these different inductive biases will yield different predictors, and their performance will depend on how compatible their assumptions are with the observed data.
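A minimal sketch of this idea: the same 1-nearest neighbor rule, applied to the same training points, can produce different predictions depending on the feature map φ. The two feature maps below (`phi_raw` and `phi_scaled`) and the toy data are illustrative assumptions, not from the source; the scaled map encodes the assumption that the first coordinate matters far more than the second.

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, x, phi):
    """1-nearest neighbor under the Euclidean distance induced by feature map phi."""
    feats = np.array([phi(xi) for xi in X_train])
    dists = np.linalg.norm(feats - phi(x), axis=1)
    return y_train[np.argmin(dists)]

# Two hypothetical feature maps encoding different inductive biases:
phi_raw = lambda x: np.asarray(x, dtype=float)                              # raw coordinates
phi_scaled = lambda x: np.asarray(x, dtype=float) * np.array([10.0, 1.0])   # emphasize feature 0

X_train = [[0.0, 0.0], [1.0, 5.0]]
y_train = np.array([0, 1])
x_query = [0.9, 1.0]

# Both feature maps fit the training data perfectly (zero training error),
# but they disagree on the query point:
print(nearest_neighbor_predict(X_train, y_train, x_query, phi_raw))     # predicts 0
print(nearest_neighbor_predict(X_train, y_train, x_query, phi_scaled))  # predicts 1
```

Both choices achieve zero training error, since every training point is its own nearest neighbor; they differ only in how they generalize to unseen inputs, which is exactly the role of the inductive bias.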


Updated 2026-05-06


Tags

D2L

Dive into Deep Learning @ D2L