Concept

K-Fold Cross-Validation

To perform K-fold CV on a dataset with n observations, the training set is separated into k folds by randomly sampling roughly n/k observations from the dataset into each fold. The model of interest is then trained on k−1 folds and validated on the left-out fold, i.e., once the model is trained, we use it to make predictions on the group that was left out of training. This process repeats k times, once per fold. To estimate the model's performance, we average the results from all k validation runs. K-Fold CV provides a good estimate of the empirical test error of a model.
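The procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the helper names `k_fold_cv`, `fit`, and `score` are hypothetical, and a library such as scikit-learn's `KFold` would normally handle the splitting.

```python
import numpy as np

def k_fold_cv(X, y, k, fit, score, seed=0):
    """Estimate test error by k-fold cross-validation.

    fit(X_train, y_train) -> model; score(model, X_val, y_val) -> error.
    (Both callables are placeholders supplied by the user.)
    """
    n = len(X)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)            # randomly assign observations to folds
    folds = np.array_split(idx, k)      # roughly n/k observations per fold
    errors = []
    for i in range(k):
        val = folds[i]                  # the held-out fold
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])               # train on k-1 folds
        errors.append(score(model, X[val], y[val]))   # validate on held-out fold
    return float(np.mean(errors))       # average the k validation errors

# Toy usage: estimate MSE of a trivial "predict the training mean" model.
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1
fit = lambda Xt, yt: yt.mean()                      # the "model" is a constant
score = lambda m, Xv, yv: np.mean((yv - m) ** 2)    # mean squared error
cv_error = k_fold_cv(X, y, k=5, fit=fit, score=score)
```

Because every observation appears in exactly one validation fold, the averaged error uses the whole dataset for evaluation while each individual fit never sees its own validation data.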


Updated 2026-05-03

Tags

Data Science

D2L

Dive into Deep Learning @ D2L