Learn Before
Concept

Benefits of Distributed Representations

Distributed representations can have a statistical advantage when an apparently complicated structure can be compactly represented using a small number of parameters. Some traditional nondistributed learning algorithms generalize only due to the smoothness assumption, which states that if $u \approx v$, then the target function $f$ to be learned has the property that $f(u) \approx f(v)$. This assumption is useful, but it suffers from the curse of dimensionality.
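A minimal sketch (not from the source) of the parameter-count argument: $n$ binary threshold features, each defined by a hyperplane, can jointly distinguish up to $2^n$ regions of input space with only $O(n \cdot d)$ parameters, whereas a nondistributed (one-hot, nearest-prototype) code would need a separate parameter vector per region. The axis-aligned weights here are an illustrative choice, not anything prescribed by the source.

```python
import itertools
import numpy as np

# d input dimensions, n binary features; axis-aligned thresholds for clarity.
d = n = 3
W = np.eye(n, d)   # feature i tests the sign of coordinate i
b = np.zeros(n)    # O(n * d) parameters in total

def distributed_code(x):
    """Length-n binary code: which side of each hyperplane x lies on."""
    return tuple((W @ np.asarray(x) + b > 0).astype(int))

# The n features jointly distinguish 2**n = 8 regions (the octants),
# even though only n*d + n = 12 parameters were used. A nondistributed
# one-hot code would instead need parameters per region, i.e. a number
# of parameters exponential in n rather than linear.
corners = list(itertools.product([-1.0, 1.0], repeat=d))
codes = {distributed_code(c) for c in corners}
print(len(codes))  # 8 distinct regions from 3 features
```

Because each feature varies independently of the others, the representation generalizes to region combinations never seen during training, which is exactly where the smoothness-only assumption breaks down in high dimensions.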


Updated 2021-07-15

Tags

Data Science