Learn Before
Concept

Explainability

ML classifiers and systems are often opaque in how they work: the learned parameters rarely map onto concepts a human can interpret in the scope of the objective, so it is difficult to explain or understand why a given model produces the predictions it does.
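One common way to probe an otherwise opaque model is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops. A minimal sketch, assuming a hypothetical black-box `predict` function and a toy two-feature dataset (all names here are illustrative, not from any particular library):

```python
import random

# Hypothetical black-box classifier: internally it predicts 1 when
# feature 0 exceeds a threshold, but callers only see predict().
def predict(row):
    return 1 if row[0] > 0.5 else 0

# Toy dataset: feature 0 is informative, feature 1 is pure noise.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Drop in accuracy when one feature's column is shuffled."""
    base = accuracy(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return base - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large drop: feature 0 drives predictions
print(permutation_importance(X, y, 1))  # zero drop: feature 1 is ignored
```

Even without seeing inside `predict`, the accuracy drop reveals which inputs the model actually relies on, which is one practical angle on explainability.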

Updated 2020-04-26

Tags

Data Science
