Concept
Quantized Distillation
Network quantization reduces a neural network's computational cost by converting a high-precision network to low precision. In quantized distillation, a high-precision teacher network distills its knowledge, typically at the level of feature maps, to a low-precision student network. This has recently been combined with self-distillation training schemes, in which the teacher and the student share the same parameters.
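As a rough illustration, here is a minimal PyTorch sketch of the idea. The network, the 4-bit fake quantizer, and the loss weighting are all illustrative assumptions, not taken from a specific method: a high-precision teacher distills its feature maps and soft logits into a student whose features are fake-quantized to low precision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(x, num_bits=4):
    # Uniform fake quantization: snap values to a low-precision grid
    # while keeping the tensor in float; the straight-through estimator
    # lets gradients pass through the rounding step.
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = qmin - x.min() / scale
    q = ((x / scale + zero_point).round().clamp(qmin, qmax) - zero_point) * scale
    return x + (q - x).detach()  # straight-through estimator

class TinyNet(nn.Module):
    # Hypothetical two-layer network; `quantize` toggles low-precision features.
    def __init__(self, quantize=False, num_bits=4):
        super().__init__()
        self.fc1 = nn.Linear(32, 64)
        self.fc2 = nn.Linear(64, 10)
        self.quantize = quantize
        self.num_bits = num_bits

    def forward(self, x):
        h = F.relu(self.fc1(x))
        if self.quantize:
            h = fake_quantize(h, self.num_bits)  # low-precision feature map
        return self.fc2(h), h  # logits and feature map

teacher = TinyNet(quantize=False)              # high-precision teacher
student = TinyNet(quantize=True, num_bits=4)   # low-precision student
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))

with torch.no_grad():
    t_logits, t_feat = teacher(x)
s_logits, s_feat = student(x)

# Distillation loss: match the teacher's feature maps and softened logits,
# plus the ordinary task loss on the labels (unit weights, for simplicity).
loss = (F.mse_loss(s_feat, t_feat)
        + F.kl_div(F.log_softmax(s_logits / 2.0, dim=1),
                   F.softmax(t_logits / 2.0, dim=1),
                   reduction="batchmean")
        + F.cross_entropy(s_logits, y))
opt.zero_grad()
loss.backward()
opt.step()
```

In the self-distillation variant mentioned above, there would be no separate teacher: the teacher forward pass would reuse the student's own weights at full precision.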
Updated 2022-10-29
Tags
Deep Learning (in Machine learning)