Learn Before
Concept

Similarity-Preserving Knowledge Distillation

The pairwise similarity structure of activations in the teacher network is transferred to the student network: input pairs that produce similar (or dissimilar) activations in the teacher should do so in the student as well. (Tung and Mori, 2019) (Passalis and Tefas, 2019; Passalis et al., 2020) (Chen et al., 2021)
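A minimal NumPy sketch of the similarity-preserving loss from Tung and Mori (2019), assuming batched activation matrices; the function name and shapes are illustrative. Pairwise similarity matrices are built from each batch, row-normalized, and compared with a squared Frobenius norm:

```python
import numpy as np

def sp_loss(teacher_feats, student_feats):
    """Similarity-preserving distillation loss (sketch, after Tung & Mori, 2019).

    Each input is a (batch, dim) activation matrix; the feature
    dimensions of teacher and student may differ, since only the
    (batch, batch) similarity matrices are compared.
    """
    def normalized_similarity(a):
        g = a @ a.T  # pairwise similarities within the batch
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        return g / np.where(norms == 0, 1.0, norms)  # row-wise L2 normalization

    b = teacher_feats.shape[0]
    g_t = normalized_similarity(teacher_feats)
    g_s = normalized_similarity(student_feats)
    # Squared Frobenius norm of the difference, scaled by batch size squared
    return np.sum((g_t - g_s) ** 2) / (b * b)
```

Because only batch-by-batch similarity matrices are matched, the student's feature dimensionality need not equal the teacher's; a perfectly matching student incurs zero loss.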


Updated 2022-10-22

Tags

Deep Learning (in Machine Learning)

Data Science