Softplus
In mathematics and machine learning, the softplus function is

f(x) = \ln(1 + e^x).

It is a smooth approximation to the ramp function, which is known as the rectifier or ReLU (rectified linear unit) in machine learning. For large negative x it is roughly \ln 1, so just above 0, while for large positive x it is roughly \ln(e^x) = x, so just above x.
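As a concrete illustration of this behaviour, here is a minimal Python sketch (the helper names softplus and relu are illustrative, not from any particular library) comparing softplus with the ramp function at a few points; softplus is written in an overflow-safe form that is algebraically equal to \ln(1 + e^x):

    import numpy as np

    def softplus(x):
        # ln(1 + e^x), rewritten as max(x, 0) + ln(1 + e^(-|x|)) to avoid overflow for large x
        return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

    def relu(x):
        # the ramp function max(0, x)
        return np.maximum(x, 0)

    xs = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
    print(softplus(xs))  # approx [4.5e-05, 0.313, 0.693, 1.313, 10.00005]
    print(relu(xs))      # [ 0.  0.  0.  1. 10.]

For large negative inputs softplus is close to 0, and for large positive inputs it is close to x, matching the description above.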
The names softplus and SmoothReLU are used in machine learning. The name "softplus", by analogy with the earlier softmax, is presumably because it is a smooth approximation of the positive part of x, which is sometimes denoted with a superscript plus, x^+ := \max(0, x).
Alternative forms
This function can be approximated as

\ln(1 + e^x) \approx \begin{cases} \ln 2, & x = 0, \\ \dfrac{x}{1 - e^{-x/\ln 2}}, & x \neq 0. \end{cases}

By making the change of variables x = y \ln 2, this is equivalent to

\log_2(1 + 2^y) \approx \begin{cases} 1, & y = 0, \\ \dfrac{y}{1 - e^{-y}}, & y \neq 0. \end{cases}
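The following Python snippet is a rough numerical check of the approximation above (the grid and the printed error estimate are incidental choices, not part of the statement itself):

    import numpy as np

    x = np.linspace(-6.0, 6.0, 1201)
    x = x[np.abs(x) > 1e-9]                     # the piecewise formula uses the value ln 2 at x = 0
    exact = np.log1p(np.exp(x))                 # ln(1 + e^x)
    approx = x / (1.0 - np.exp(-x / np.log(2)))
    print(np.max(np.abs(exact - approx)))       # maximum absolute error, on the order of 1e-2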
A sharpness parameter k may be included:

f(x) = \frac{1}{k} \ln(1 + e^{kx}), \qquad f'(x) = \frac{1}{1 + e^{-kx}}.
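A short sketch of the effect of the sharpness parameter, using the parametrized form above (the helper name softplus_k is illustrative): as k grows, the function approaches the ramp function.

    import numpy as np

    def softplus_k(x, k):
        # (1/k) * ln(1 + e^(k x)), written in an overflow-safe form
        kx = k * x
        return (np.maximum(kx, 0) + np.log1p(np.exp(-np.abs(kx)))) / k

    x = np.linspace(-3.0, 3.0, 7)
    for k in (1, 5, 50):
        print(k, np.round(softplus_k(x, k), 4))
    print(np.maximum(x, 0))   # the limiting ramp function as k -> infinity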
Related functions
The derivative of softplus is the standard logistic function:

f'(x) = \frac{e^x}{1 + e^x} = \frac{1}{1 + e^{-x}}.

The logistic function, also called the sigmoid function, is a smooth approximation of the derivative of the rectifier, the Heaviside step function.
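A finite-difference check of this identity (a minimal sketch; the step size and grid are arbitrary) confirms that the numerical derivative of softplus agrees with the logistic function:

    import numpy as np

    def softplus(x):
        return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = np.linspace(-5.0, 5.0, 101)
    h = 1e-6
    numeric = (softplus(x + h) - softplus(x - h)) / (2 * h)   # central difference
    print(np.max(np.abs(numeric - logistic(x))))              # roughly 1e-9, i.e. finite-difference noise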
LogSumExp
The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero:

\operatorname{LSE_0^+}(x_1, \dots, x_n) := \operatorname{LSE}(0, x_1, \dots, x_n) = \ln(1 + e^{x_1} + \cdots + e^{x_n}).

The LogSumExp function itself is

\operatorname{LSE}(x_1, \dots, x_n) = \ln(e^{x_1} + \cdots + e^{x_n}),
and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
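These relationships can be checked numerically; the sketch below (plain NumPy, with illustrative helper names) shows that LogSumExp with an extra zero argument reduces to softplus in one dimension, and prints the softmax of (0, x_1, ..., x_n), which is the gradient of LSE(0, x_1, ..., x_n) with respect to its arguments:

    import numpy as np

    def logsumexp(v):
        m = np.max(v)
        return m + np.log(np.sum(np.exp(v - m)))   # shifted for numerical stability

    def softmax(v):
        e = np.exp(v - np.max(v))
        return e / e.sum()

    # One dimension: LSE(0, x) equals softplus(x) = ln(1 + e^x)
    print(logsumexp(np.array([0.0, 1.5])), np.log1p(np.exp(1.5)))

    # Several dimensions: LSE with the first argument set to zero
    x = np.array([1.5, -0.3, 2.0])
    v = np.concatenate(([0.0], x))
    print(logsumexp(v))           # ln(1 + e^1.5 + e^-0.3 + e^2.0)
    print(softmax(v))             # gradient of LSE(v) with respect to v; entries sum to 1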
Convex conjugate
The convex conjugate of the softplus function is the negative binary entropy function. This is because the derivatives of a convex function and of its conjugate are inverse functions of each other: the derivative of softplus is the logistic function, whose inverse function is the logit, which is the derivative of negative binary entropy.

Softplus can be interpreted as logistic loss, so, by duality, minimizing logistic loss corresponds to maximizing entropy. This justifies the principle of maximum entropy as loss minimization.
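A grid-based numerical sketch of this conjugacy (the grid and probe points are arbitrary choices): the convex conjugate f*(y) = sup_x (xy − f(x)) of softplus should agree with the negative binary entropy y \ln y + (1 − y) \ln(1 − y) for y in (0, 1).

    import numpy as np

    def softplus(x):
        return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

    def neg_binary_entropy(y):
        return y * np.log(y) + (1 - y) * np.log(1 - y)

    xs = np.linspace(-30.0, 30.0, 200001)            # dense grid over which to take the supremum
    for y in (0.1, 0.5, 0.9):
        conjugate = np.max(y * xs - softplus(xs))    # sup_x (x*y - softplus(x)), approximated on the grid
        print(y, conjugate, neg_binary_entropy(y))   # the last two columns should nearly coincide

The supremum is attained at x = logit(y), consistent with the inverse-derivative relationship described above.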