Learn Before
Concept
Generalization Due to Shared Attributes
If pure symbols are associated with meaningful distributed representations, much of what can be said about one symbol generalizes to other symbols with similar representations, and vice versa. Neural language models that operate on distributed representations of words therefore generalize better than models that operate directly on symbolic representations. Distributed representations induce a rich similarity space in which semantically close concepts/inputs lie close together in distance, a property that purely symbolic representations lack.
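A minimal sketch of the similarity-space idea, using toy hand-picked vectors (an assumption for illustration; real models learn these embeddings from data). It contrasts distributed vectors, where "cat" and "dog" are closer than "cat" and "car", with one-hot (purely symbolic) encodings, where every pair of distinct words is equally distant:

```python
import numpy as np

# Toy distributed representations (hypothetical values, not learned).
embeddings = {
    "cat": np.array([0.90, 0.80, 0.10]),
    "dog": np.array([0.85, 0.75, 0.15]),
    "car": np.array([0.10, 0.20, 0.95]),
}

# One-hot symbolic encodings: every word is its own orthogonal axis.
one_hot = {
    "cat": np.array([1.0, 0.0, 0.0]),
    "dog": np.array([0.0, 1.0, 0.0]),
    "car": np.array([0.0, 0.0, 1.0]),
}

def cosine_similarity(a, b):
    # 1.0 = same direction, 0.0 = orthogonal (no similarity).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Distributed space: semantically close words are geometrically close.
sim_cat_dog = cosine_similarity(embeddings["cat"], embeddings["dog"])
sim_cat_car = cosine_similarity(embeddings["cat"], embeddings["car"])
print(sim_cat_dog > sim_cat_car)  # cat is more similar to dog than to car

# Symbolic space: all distinct word pairs are equally (dis)similar.
print(cosine_similarity(one_hot["cat"], one_hot["dog"]))  # 0.0
print(cosine_similarity(one_hot["cat"], one_hot["car"]))  # 0.0
```

Because the one-hot vectors carry no notion of similarity, nothing learned about "cat" transfers to "dog"; in the distributed space, the proximity of the two vectors is what enables that generalization.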
Updated 2021-07-15
Tags
Data Science