Automatic differentiation
In mathematics and computer algebra, automatic differentiation, also called algorithmic differentiation, computational differentiation, and differentiation arithmetic, is a set of techniques for evaluating the partial derivatives of a function specified by a computer program. Automatic differentiation is a subtle and central tool for automating the simultaneous computation of the numerical values of arbitrarily complex functions and their derivatives with no need for a symbolic representation of the derivative; only the function rule, or an algorithm thereof, is required. Automatic differentiation is thus neither numeric nor symbolic, nor is it a combination of both. It is also preferable to ordinary numerical methods: in contrast to the more traditional numerical methods based on finite differences, automatic differentiation is exact in theory, and in comparison to symbolic algorithms, it is computationally inexpensive.
Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations and elementary functions. By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
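For example (an illustrative decomposition chosen here, not one prescribed by the text), the assignment \(y = \exp(x)\,\sin(x)\) is executed as a short sequence of elementary steps, each with a known derivative:
\[
w_1 = x, \qquad w_2 = \exp(w_1), \qquad w_3 = \sin(w_1), \qquad y = w_4 = w_2 \cdot w_3 .
\]
Differentiating each elementary step and combining the results with the chain rule yields \(dy/dx = \exp(x)\sin(x) + \exp(x)\cos(x)\) without ever forming a symbolic expression for the derivative as a whole.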
Difference from other differentiation methods
Automatic differentiation is distinct from symbolic differentiation and numerical differentiation. Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.
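To make the numerical-differentiation difficulty concrete (a standard Taylor-series argument, not specific to any particular implementation), the one-sided difference quotient approximates the derivative only up to a truncation error:
\[
\frac{f(x+h) - f(x)}{h} = f'(x) + \frac{h}{2} f''(x) + O(h^2),
\]
so the step size \(h\) must be small, yet for very small \(h\) the subtraction \(f(x+h) - f(x)\) cancels most significant digits in floating-point arithmetic. Automatic differentiation avoids this trade-off entirely, because no difference quotient is ever formed.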
Applications
Because of its efficiency and accuracy in computing first and higher order derivatives, automatic differentiation is a widely used technique with diverse applications in scientific computing and mathematics, and there are numerous computational implementations of it; examples include INTLAB, Sollya, and InCLosure. In practice, there are two types of algorithmic differentiation: a forward type and a reverse type. The two types are closely related and complementary, and both have a wide variety of applications in, e.g., non-linear optimization, sensitivity analysis, robotics, machine learning, computer graphics, and computer vision. Automatic differentiation is particularly important in the field of machine learning. For example, it allows one to implement backpropagation in a neural network without a manually computed derivative.
Forward and reverse accumulation
Chain rule of partial derivatives of composite functions
Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions. For the simple composition
\[
y = f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3,
\]
\[
w_0 = x, \qquad w_1 = h(w_0), \qquad w_2 = g(w_1), \qquad w_3 = f(w_2) = y,
\]
the chain rule gives
\[
\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1} \frac{\partial w_1}{\partial x} = \frac{\partial f(w_2)}{\partial w_2} \frac{\partial g(w_1)}{\partial w_1} \frac{\partial h(w_0)}{\partial x} .
\]
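As a concrete instance (an illustrative choice of functions, not one fixed by the text), take \(h(x) = x^2\), \(g(u) = \sin(u)\), and \(f(v) = \exp(v)\), so that \(y = \exp(\sin(x^2))\). Multiplying the three factors above from the inside out gives
\[
\frac{\partial y}{\partial x} = \exp(w_2)\,\cos(w_1)\,2x = \exp(\sin(x^2))\,\cos(x^2)\,2x .
\]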
Two types of automatic differentiation
Usually, two distinct modes of automatic differentiation are presented:
- forward accumulation
- reverse accumulation
- Forward accumulation computes the recursive relation \(\frac{\partial w_i}{\partial x} = \frac{\partial w_i}{\partial w_{i-1}} \frac{\partial w_{i-1}}{\partial x}\) with \(w_3 = y\).
- Reverse accumulation computes the recursive relation \(\frac{\partial y}{\partial w_i} = \frac{\partial y}{\partial w_{i+1}} \frac{\partial w_{i+1}}{\partial w_i}\) with \(w_0 = x\).
Which of these two types should be used depends on the sweep count. The computational complexity of one sweep is proportional to the complexity of the original code.
- Forward accumulation is more efficient than reverse accumulation for functions \(f : \mathbb{R}^n \to \mathbb{R}^m\) with \(n \ll m\), as only \(n\) sweeps are necessary, compared to \(m\) sweeps for reverse accumulation.
- Reverse accumulation is more efficient than forward accumulation for functions \(f : \mathbb{R}^n \to \mathbb{R}^m\) with \(n \gg m\), as only \(m\) sweeps are necessary, compared to \(n\) sweeps for forward accumulation; a Jacobian view of these sweep counts is sketched below this list.
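In Jacobian terms (a standard restatement of the sweep counts, added here for illustration), each forward sweep with a seed vector \(\dot{x}\) computes one Jacobian-vector product, and each reverse sweep with a seed vector \(\bar{y}\) computes one vector-Jacobian product:
\[
J = \frac{\partial f}{\partial x} \in \mathbb{R}^{m \times n}, \qquad \text{forward sweep: } J\dot{x}, \qquad \text{reverse sweep: } \bar{y}^{\mathsf T} J .
\]
Recovering the full Jacobian therefore takes \(n\) forward sweeps (one per column) or \(m\) reverse sweeps (one per row); for a scalar-valued function with many inputs, such as a machine-learning loss, a single reverse sweep yields the entire gradient.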
Forward accumulation was introduced by R. E. Wengert in 1964. According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown. Seppo Linnainmaa published reverse accumulation in 1976.
Forward accumulation
In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the inner functions in the chain rule:
\[
\frac{\partial y}{\partial x}
= \frac{\partial y}{\partial w_{n-1}} \frac{\partial w_{n-1}}{\partial x}
= \frac{\partial y}{\partial w_{n-1}} \left( \frac{\partial w_{n-1}}{\partial w_{n-2}} \frac{\partial w_{n-2}}{\partial x} \right)
= \frac{\partial y}{\partial w_{n-1}} \left( \frac{\partial w_{n-1}}{\partial w_{n-2}} \left( \frac{\partial w_{n-2}}{\partial w_{n-3}} \frac{\partial w_{n-3}}{\partial x} \right) \right)
= \cdots
\]
This can be generalized to multiple variables as a matrix product of Jacobians.
Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable \(w\) is augmented with its derivative \(\dot{w}\) (interpreted as a numerical value, not a symbolic expression),
\[
\dot{w} = \frac{\partial w}{\partial x},
\]
as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule.
Using the chain rule, if \(w_i\) has predecessors \(w_j\) in the computational graph:
\[
\dot{w}_i = \sum_{j \in \mathrm{pred}(i)} \frac{\partial w_i}{\partial w_j} \, \dot{w}_j .
\]
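A minimal operator-overloading sketch of this augmentation (illustrative code; the Dual type and the chosen test function are assumptions for the example, not the article's listing below) pairs each value \(w\) with its tangent \(\dot{w}\) and updates both at every elementary operation:

#include <cmath>
#include <iostream>

// Illustrative forward-mode sketch: a value paired with its derivative
// with respect to the chosen independent variable.
struct Dual {
    double value;
    double deriv;
};

// Each elementary operation propagates the derivative alongside the value (chain rule).
Dual operator*(Dual a, Dual b) { return {a.value * b.value, a.deriv * b.value + a.value * b.deriv}; }
Dual exp(Dual a) { return {std::exp(a.value), std::exp(a.value) * a.deriv}; }
Dual sin(Dual a) { return {std::sin(a.value), std::cos(a.value) * a.deriv}; }

int main() {
    Dual x{1.5, 1.0};              // seed dx/dx = 1
    Dual y = exp(x) * sin(x);      // y and dy/dx are computed in one pass
    std::cout << "y = " << y.value << ", dy/dx = " << y.deriv << "\n";
    // dy/dx = exp(x)*sin(x) + exp(x)*cos(x), evaluated at x = 1.5
}

Evaluating \(y = \exp(x)\sin(x)\) this way produces \(y\) and \(\dot{y} = \exp(x)\sin(x) + \exp(x)\cos(x)\) in a single pass, in line with the elementary decomposition shown earlier.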
Figure 2: Example of forward accumulation with computational graph
As an example, consider the function:
\[
y = f(x_1, x_2) = x_1 x_2 + \sin x_1 = w_1 w_2 + \sin w_1 = w_4 + w_3 = w_5 .
\]
For clarity, the individual sub-expressions have been labeled with the variables \(w_i\).
The choice of the independent variable with respect to which differentiation is performed affects the seed values \(\dot{w}_1\) and \(\dot{w}_2\). Given interest in the derivative of this function with respect to \(x_1\), the seed values should be set to:
\[
\dot{w}_1 = \frac{\partial x_1}{\partial x_1} = 1, \qquad \dot{w}_2 = \frac{\partial x_2}{\partial x_1} = 0 .
\]
With the seed values set, the values propagate using the chain rule as shown. Figure 2 shows a pictorial depiction of this process as a computational graph.
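Carried out explicitly for this example (a worked propagation derived from the example function and the seed values above), each intermediate value is computed together with its tangent:
\[
\begin{aligned}
w_1 &= x_1 & \dot{w}_1 &= 1 \ \text{(seed)}\\
w_2 &= x_2 & \dot{w}_2 &= 0 \ \text{(seed)}\\
w_3 &= \sin w_1 & \dot{w}_3 &= \cos(w_1)\,\dot{w}_1\\
w_4 &= w_1 w_2 & \dot{w}_4 &= \dot{w}_1 w_2 + w_1 \dot{w}_2\\
w_5 &= w_3 + w_4 & \dot{w}_5 &= \dot{w}_3 + \dot{w}_4
\end{aligned}
\]
so that \(\dot{w}_5 = \partial y / \partial x_1 = x_2 + \cos x_1\).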
To compute the gradient of this example function, which requires not only \(\partial y / \partial x_1\) but also \(\partial y / \partial x_2\), an additional sweep is performed over the computational graph using the seed values \(\dot{w}_1 = 0\); \(\dot{w}_2 = 1\).
Implementation
Pseudocode
Forward accumulation calculates the function and the derivative in one pass. The associated method call expects the expression to be differentiated and a variable with respect to which the derivative is taken. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, otherwise 0. Then the partial function as well as the partial derivative are evaluated.
C++
#include <iostream>
#include <utility>
struct Variable;
// Each node returns {value, partial derivative with respect to V} in one recursive pass (forward mode).
struct Expression { virtual std::pair<double, double> evaluateAndDerive(const Variable *V) const = 0; virtual ~Expression() = default; };
struct Variable : Expression { double value; explicit Variable(double v) : value(v) {}
    std::pair<double, double> evaluateAndDerive(const Variable *V) const override { return {value, this == V ? 1.0 : 0.0}; } };
struct Plus : Expression { const Expression &a, &b; Plus(const Expression &a, const Expression &b) : a(a), b(b) {}
    std::pair<double, double> evaluateAndDerive(const Variable *V) const override { auto [x, dx] = a.evaluateAndDerive(V); auto [y, dy] = b.evaluateAndDerive(V); return {x + y, dx + dy}; } };
struct Multiply : Expression { const Expression &a, &b; Multiply(const Expression &a, const Expression &b) : a(a), b(b) {}
    std::pair<double, double> evaluateAndDerive(const Variable *V) const override { auto [x, dx] = a.evaluateAndDerive(V); auto [y, dy] = b.evaluateAndDerive(V); return {x * y, x * dy + dx * y}; } };
int main() { Variable x(2), y(4); Plus s(x, y); Multiply z(x, s);  // z = x * (x + y)
    auto [value, derivative] = z.evaluateAndDerive(&x);           // evaluated at x = 2, y = 4
    std::cout << value << " " << derivative << "\n"; }            // prints 12 8
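Compiled as C++17, the program evaluates z = x·(x + y) at x = 2, y = 4 and prints the value 12 together with the derivative ∂z/∂x = 2x + y = 8, both obtained in a single recursive traversal of the expression tree, matching the description in the pseudocode section above.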
Reverse accumulation
In reverse accumulation AD, the dependent variable to be differentiated is fixed and the derivative is computed with respect to each sub-expression recursively. In a pen-and-paper calculation, the derivative of the outer functions is repeatedly substituted in the chain rule:
\[
\frac{\partial y}{\partial x}
= \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x}
= \left( \frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1} \right) \frac{\partial w_1}{\partial x}
= \left( \left( \frac{\partial y}{\partial w_3} \frac{\partial w_3}{\partial w_2} \right) \frac{\partial w_2}{\partial w_1} \right) \frac{\partial w_1}{\partial x}
= \cdots
\]
In reverse accumulation, the quantity of interest is the adjoint, denoted with a bar over the variable (\(\bar{w}\)); it is a derivative of a chosen dependent variable \(y\) with respect to a subexpression \(w\):
\[
\bar{w} = \frac{\partial y}{\partial w} .
\]
Using the chain rule, if \(w_i\) has successors \(w_j\) in the computational graph:
\[
\bar{w}_i = \sum_{j \in \mathrm{succ}(i)} \bar{w}_j \, \frac{\partial w_j}{\partial w_i} .
\]
Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list, which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as rematerialization. Checkpointing is also used to save intermediary states.
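A minimal sketch of such a tape (illustrative code; the names Tape, variable, add, mul, sin_, and gradient are chosen here for the example and are not taken from any particular library) records, for each intermediate variable, the indices of its inputs and the local partial derivatives of the operation that produced it, then propagates adjoints backwards over the recorded entries:

#include <cmath>
#include <iostream>
#include <vector>

// Illustrative reverse-mode sketch: one tape entry per intermediate variable,
// holding its value plus the indices and local partial derivatives of its inputs
// (a Wengert list).
struct Tape {
    struct Entry { double value; int in1, in2; double d1, d2; };
    std::vector<Entry> entries;

    int variable(double value) { entries.push_back({value, -1, -1, 0.0, 0.0}); return (int)entries.size() - 1; }
    int add(int a, int b) { entries.push_back({entries[a].value + entries[b].value, a, b, 1.0, 1.0}); return (int)entries.size() - 1; }
    int mul(int a, int b) { entries.push_back({entries[a].value * entries[b].value, a, b, entries[b].value, entries[a].value}); return (int)entries.size() - 1; }
    int sin_(int a) { entries.push_back({std::sin(entries[a].value), a, -1, std::cos(entries[a].value), 0.0}); return (int)entries.size() - 1; }

    // Reverse sweep: seed the chosen output with adjoint 1 and push adjoints back along the tape.
    std::vector<double> gradient(int output) {
        std::vector<double> adjoint(entries.size(), 0.0);
        adjoint[output] = 1.0;
        for (int i = (int)entries.size() - 1; i >= 0; --i) {
            const Entry &e = entries[i];
            if (e.in1 >= 0) adjoint[e.in1] += adjoint[i] * e.d1;
            if (e.in2 >= 0) adjoint[e.in2] += adjoint[i] * e.d2;
        }
        return adjoint;
    }
};

int main() {
    // y = x1 * x2 + sin(x1), the running example, evaluated at (x1, x2) = (2, 3).
    Tape t;
    int x1 = t.variable(2.0), x2 = t.variable(3.0);
    int y = t.add(t.mul(x1, x2), t.sin_(x1));
    std::vector<double> g = t.gradient(y);
    std::cout << "y = " << t.entries[y].value << ", dy/dx1 = " << g[x1] << ", dy/dx2 = " << g[x2] << "\n";
    // dy/dx1 = x2 + cos(x1), dy/dx2 = x1 -- the whole gradient from one reverse sweep.
}

For the example function this single backward pass yields both \(\partial y / \partial x_1 = x_2 + \cos x_1\) and \(\partial y / \partial x_2 = x_1\), whereas forward accumulation would need one sweep per input.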
Figure 3: Example of reverse accumulation with computational graph
The operations to compute the derivative using reverse accumulation are shown below:
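For the example function, these adjoint operations (a worked listing derived from the example function and the adjoint recurrence above) are:
\[
\begin{aligned}
\bar{w}_5 &= 1 && \text{(seed the output } w_5 = y\text{)}\\
\bar{w}_4 &= \bar{w}_5 && (w_5 = w_4 + w_3)\\
\bar{w}_3 &= \bar{w}_5 && (w_5 = w_4 + w_3)\\
\bar{w}_2 &= \bar{w}_4\, w_1 && (w_4 = w_1 w_2)\\
\bar{w}_1 &= \bar{w}_4\, w_2 + \bar{w}_3 \cos w_1 && (w_4 = w_1 w_2,\ w_3 = \sin w_1)
\end{aligned}
\]
so that \(\bar{w}_1 = \partial y / \partial x_1 = x_2 + \cos x_1\) and \(\bar{w}_2 = \partial y / \partial x_2 = x_1\), both obtained from the single seed \(\bar{w}_5 = 1\).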
The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function \(y = f(x)\) in the primal causes \(\bar{x} = \bar{y} f'(x)\) in the adjoint; etc.