Concept

Mixture of Experts in Dynamic Neural Networks

Mixture of Experts (MoE) takes advantage of an increased number of parameters without increasing the computational load. A layer in such a model consists of several expert networks, and only a subset of them is activated during inference. A key component of an MoE model is the routing mechanism, which decides which experts process each input; routing mechanisms can be divided into two types: learned routing and non-learnable (fixed) routing.
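The sketch below is a minimal PyTorch illustration of a sparse MoE layer with learned top-k routing; the class name `MoELayer`, the feed-forward expert design, and all dimensions are illustrative assumptions, not a specific published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Sparse MoE layer: a learned router sends each input to its top-k experts."""
    def __init__(self, d_model, d_hidden, num_experts=4, top_k=2):
        super().__init__()
        # Each expert is a small feed-forward network (illustrative choice).
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        self.router = nn.Linear(d_model, num_experts)  # learned routing weights
        self.top_k = top_k

    def forward(self, x):
        # x: (batch, d_model)
        logits = self.router(x)                          # (batch, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)             # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each input, keeping compute sparse.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route a batch of 8 inputs through 4 experts, 2 active per input.
layer = MoELayer(d_model=16, d_hidden=32)
y = layer(torch.randn(8, 16))
```

With non-learnable routing, the `router` projection would be replaced by a fixed rule (for example, a hash of the input or token id) so that expert assignment is not trained.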


Updated 2022-06-25


Tags

Data Science
