Learn Before
Concept
K-Fold Cross-Validation
To perform K-fold CV on a dataset with n observations, the training set is partitioned into K folds of roughly equal size by randomly sampling observations from the dataset into each fold. The model of interest is then trained on K − 1 folds and validated on the left-out fold, i.e., once the model is trained, we use it to make predictions on the fold that was left out of training. This process repeats K times, leaving out a different fold each time. To get the model performance, we average the results from all K validation runs. K-fold CV provides a good estimate of the empirical test error of a model.
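The procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not a library implementation; the `train_and_score` callback is a hypothetical placeholder for whatever model-fitting and scoring routine you use.

```python
import numpy as np

def k_fold_cv(X, y, k, train_and_score, seed=0):
    """Estimate test error by K-fold cross-validation.

    train_and_score(X_train, y_train, X_val, y_val) -> validation error
    (a hypothetical user-supplied callback for fitting and scoring).
    """
    n = len(X)
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n)        # randomly assign observations...
    folds = np.array_split(indices, k)  # ...to k roughly equal folds
    scores = []
    for i in range(k):
        val_idx = folds[i]              # the left-out fold
        train_idx = np.concatenate(
            [folds[j] for j in range(k) if j != i]  # the other k-1 folds
        )
        scores.append(
            train_and_score(X[train_idx], y[train_idx], X[val_idx], y[val_idx])
        )
    return float(np.mean(scores))       # average over the k validation runs

# Usage sketch: a trivial "model" that predicts the training-set mean,
# scored by mean squared error on the held-out fold.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = X.ravel()

def mean_predictor_mse(X_tr, y_tr, X_va, y_va):
    pred = y_tr.mean()
    return float(((y_va - pred) ** 2).mean())

cv_error = k_fold_cv(X, y, k=5, train_and_score=mean_predictor_mse)
```

Each observation appears in exactly one validation fold, so every data point contributes to the error estimate exactly once.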

Updated 2026-05-03
Tags
Data Science
D2L
Dive into Deep Learning @ D2L