Learn Before
Human Level Performance: Based on the evidence below, which two of the following four options seem the most promising to try?
You define 0.1% error as "human-level performance." After working further on your algorithm, you measure the following errors:
- Human-level performance: 0.1%
- Training set error: 2.0%
- Dev set error: 2.1%
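Since the choice between the options hinges on comparing the two error gaps, here is a minimal sketch of the avoidable-bias versus variance arithmetic these numbers imply (variable names are my own; "avoidable bias" and "variance" follow the usual terminology, with human-level error standing in for Bayes error):

```python
# Bias/variance check for the numbers above (errors given in %).
human_level_error = 0.1   # proxy for Bayes error
training_error = 2.0
dev_error = 2.1

# Gap between training error and human-level error.
avoidable_bias = training_error - human_level_error
# Gap between dev error and training error.
variance = dev_error - training_error

print(f"Avoidable bias: {avoidable_bias:.1f}%")  # 1.9%
print(f"Variance:       {variance:.1f}%")        # 0.1%
```

Because the avoidable bias (1.9%) dwarfs the variance (0.1%), the promising options are the ones that reduce bias on the training set rather than those that address overfitting to it.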
Tags
Data Science
Related
Human Level Proxy for Bayes Error
If your goal is to have “human-level performance” be a proxy (or estimate) for Bayes error, how would you define “human-level performance”?
Human Level Performance: Based on the evidence below, which two of the following four options seem the most promising to try?
Which of the following statements about algorithmic performance do you agree with?
Why does regularization prevent overfitting?
Popular Regularization Techniques in Deep Learning
Local Constancy and Smoothness Priors
Parameter Sharing
Parameter Tying
L1 regularization and L2 regularization
MTL as a Regularizer
Parameter Penalties in Classical Regularization