
Core Issue of Few-Shot Learning

Let $\mathcal{H}$ be the hypothesis space chosen for Few-Shot Learning, $\hat{h}$ be the model that minimizes the expected risk (possibly not a Few-Shot Learning model), $h^{*}$ be the model in $\mathcal{H}$ that minimizes the expected risk, and $h_I$ be the model in $\mathcal{H}$ that minimizes the empirical risk. The total error can then be decomposed as

$$\mathbb{E}[R(h_I)-R(\hat{h})]=\mathbb{E}[R(h^{*})-R(\hat{h})]+\mathbb{E}[R(h_I)-R(h^{*})].$$

The first term is the approximation error, which is determined by the choice of $\mathcal{H}$; the second term is the estimation error, which depends on the quantity and quality of the training data.

The core issue of few-shot learning is that the training sample size is small, which inflates the estimation error and makes the empirical risk minimizer $h_I$ unreliable.
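The effect of sample size on estimation error can be illustrated numerically. The sketch below (an illustrative toy example, not from the source) uses the simplest possible hypothesis space: constant predictors $h(x)=c$ under squared loss, where the expected-risk minimizer in $\mathcal{H}$ is the true mean and the empirical-risk minimizer $h_I$ is the sample mean. All names and parameter values here are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed values): targets y ~ Normal(true_mean, noise_std).
# Hypothesis space H = constant predictors h(x) = c, squared loss.
# The expected-risk minimizer in H is c* = true_mean.
true_mean, noise_std = 2.0, 1.0

def expected_risk(c):
    # E[(y - c)^2] = noise_std^2 + (true_mean - c)^2, computed in closed form.
    return noise_std**2 + (true_mean - c) ** 2

def estimation_error(n, trials=2000):
    # Average excess risk E[R(h_I) - R(h*)] of the empirical risk
    # minimizer (the sample mean) over many independent training draws.
    samples = rng.normal(true_mean, noise_std, size=(trials, n))
    c_hat = samples.mean(axis=1)  # h_I for each simulated training set
    return (expected_risk(c_hat) - expected_risk(true_mean)).mean()

few = estimation_error(n=5)     # few-shot regime
many = estimation_error(n=500)  # data-rich regime
print(f"n=5:   {few:.4f}")
print(f"n=500: {many:.4f}")
```

For the sample mean the estimation error is $\sigma^2/n$ in expectation, so shrinking the training set from 500 samples to 5 inflates it by a factor of 100, mirroring the decomposition above: the approximation error is unchanged (same $\mathcal{H}$), but $h_I$ becomes unreliable.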


Updated 2022-05-22


Tags

Deep Learning (in Machine Learning)

Data Science