Learn Before
Non-Linear Activation Functions
Modern neural network models use non-linear activation functions. They allow the model to learn complex mappings between the network’s inputs and outputs, which are essential for modeling complex data such as images, video, audio, and datasets that are non-linear or high-dimensional.
There are 8 common non-linear activation functions:
- Sigmoid / Logistic
- TanH / Hyperbolic Tangent
- ELU (Exponential Linear Unit)
- ReLU (Rectified Linear Unit)
- Leaky ReLU
- Parametric ReLU
- Softmax
- Swish
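The eight functions above can be sketched as simple NumPy expressions. This is a minimal illustration, not a production implementation; the default slope and scale parameters (e.g. `alpha=0.01` for Leaky ReLU, `beta=1.0` for Swish) are common conventions, not values fixed by this article.

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes inputs to (-1, 1), zero-centered
    return np.tanh(x)

def relu(x):
    # Passes positive inputs, zeros out negatives
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small fixed slope for negative inputs avoids "dead" units
    return np.where(x > 0, x, alpha * x)

def parametric_relu(x, alpha):
    # Same form as Leaky ReLU, but alpha is learned during training
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth exponential curve for negative inputs
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def softmax(x):
    # Converts a vector of scores to a probability distribution;
    # subtracting the max improves numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

def swish(x, beta=1.0):
    # Smooth, non-monotonic: x * sigmoid(beta * x)
    return x * sigmoid(beta * x)
```

For example, `softmax(np.array([1.0, 2.0, 3.0]))` returns a vector of positive values summing to 1, while `relu(np.array([-2.0, 3.0]))` zeros out the negative entry.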
Tags
Data Science
Learn After
Linear vs. Non-Linear Activation Functions
Sigmoid/Logistic Function
TanH/Hyperbolic Tangent Function
Swish Function
ReLU (Rectified Linear Unit)
ELU (Exponential Linear Unit)
Which activation function is represented by each of these plots?
Which of the following introduces nonlinearity into neural networks?
Softmax Function