Loop fission and fusion
Loop fission is a compiler optimization in which a loop is broken into multiple loops over the same index range, each taking only a part of the original loop's body. The goal is to break a large loop body into smaller ones to achieve better utilization of locality of reference. The optimization is most effective on multi-core processors, which can run the resulting independent loops as separate tasks on separate cores, as sketched at the end of the fission example below.
Conversely, loop fusion is a compiler optimization and loop transformation which replaces multiple loops with a single one. Loop fusion does not always improve run-time speed. On some architectures, two loops may actually perform better than one loop because, for example, there is increased data locality within each loop. One of the main benefits of loop fusion is that it allows temporary allocations to be avoided, which can lead to huge performance gains in numerical computing languages such as Julia when doing elementwise operations on arrays.
Other benefits of loop fusion are that it avoids the overhead of the loop control structures and that it lets the processor exploit instruction-level parallelism across the fused loop body; this is possible when there are no data dependencies between the bodies of the two loops. If loop fusion is able to remove redundant allocations, performance increases can be large. Otherwise, there is a more complex trade-off between data locality, instruction-level parallelism, and loop overhead that may make loop fusion, loop fission, or neither the preferable optimization.
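As a minimal sketch of this trade-off (the array names and the arithmetic are illustrative assumptions, not drawn from a particular workload), two adjacent loops over the same range whose bodies do not depend on each other can be fused into a single pass:

#include <cstddef>

// Before fusion: two passes over the same index range, each paying the loop-control cost.
void unfused(float* a, float* c, const float* b, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) a[i] = b[i] * 2.0f;
    for (std::size_t i = 0; i < n; ++i) c[i] = b[i] + 1.0f;
}

// After fusion: one pass; the two statements have no data dependence on each other,
// so the processor can overlap them via instruction-level parallelism, and the
// loop-control overhead is paid only once.
void fused(float* a, float* c, const float* b, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        a[i] = b[i] * 2.0f;
        c[i] = b[i] + 1.0f;
    }
}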
Fission
Example in C
int i, a[100], b[100];
for (i = 0; i < 100; i++) {
    a[i] = 1;
    b[i] = 2;
}
is equivalent to:
int i, a[100], b[100];
for (i = 0; i < 100; i++) {
    a[i] = 1;
}
for (i = 0; i < 100; i++) {
    b[i] = 2;
}
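Building on the fissioned loops above, the following C++ sketch illustrates the multi-core point from the introduction; std::thread is used purely for illustration, and the array size and stored values are assumptions:

#include <cstddef>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 100000;
    std::vector<int> a(n), b(n);

    // After fission, each loop touches only its own array, so the two loops
    // can run concurrently on separate cores.
    std::thread t1([&] { for (std::size_t i = 0; i < n; ++i) a[i] = 1; });
    std::thread t2([&] { for (std::size_t i = 0; i < n; ++i) b[i] = 2; });
    t1.join();
    t2.join();
}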
Fusion
Example in C++ and MATLAB
Consider the following MATLAB code:

x = 0:999;        % Create an array of numbers from 0 to 999
y = sin(x) + 4;   % Take the sine of x and add 4 to each element
The same computation can be written in C++ using an array class with an overloaded sin function and operator+ (sketched here with the class and function bodies omitted):

import std;
using std::unique_ptr;

class FloatArray { /* float storage held in a unique_ptr<float[]>; constructor and element access omitted */ };

FloatArray sin(const FloatArray& v);                 // allocates a new array for its result
FloatArray operator+(const FloatArray& v, float s);  // allocates another array for the sum

int main() {
    FloatArray x(1000);            // assume x is initialized to 0, 1, ..., 999, as in the MATLAB code
    FloatArray y = sin(x) + 4.0f;  // sin(x) produces a temporary array; the + produces y
}
However, the above example unnecessarily allocates a temporary array for the result of sin(x). A more efficient implementation would allocate a single array for y and compute y in a single loop. To optimize this, a C++ compiler would need to:

- Inline the sin and operator+ function calls.
- Fuse the loops into a single loop.
- Remove the unused stores into the temporary arrays.
- Remove the unused allocation and free.
The last step requires the compiler to know that removing the calls is safe; although calls to arbitrary functions could have global side effects, some compilers hardcode symbols such as malloc and free so that they can remove unused allocations from the code. However, as of clang 12.0.0 and gcc 11.1, this loop fusion and redundant allocation removal does not occur, even at the highest optimization level.

Some languages specifically targeted towards numerical computing, such as Julia, have the concept of loop fusion built in at a high level: the compiler notices adjacent elementwise operations and fuses them into a single loop. Currently, to achieve the same syntax in general-purpose languages like C++, the sin and operator+ functions must pessimistically allocate arrays to store their results, since they do not know in what context they will be called. This issue can be avoided in C++ by using a different syntax that does not rely on the compiler to remove unnecessary temporary allocations.
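For example, under the assumption that the elementwise computation may be expressed as a plain loop over a single result array (rather than through the array-class syntax above), the MATLAB example can be written so that nothing is left for the compiler to fuse or deallocate:

#include <cmath>
#include <vector>

int main() {
    std::vector<float> y(1000);   // the only allocation: the result array
    for (int i = 0; i < 1000; ++i) {
        // x = 0:999 is folded into the loop index; sin and the +4 are applied
        // in the same pass, so no temporary array for sin(x) is ever created.
        y[i] = std::sin(static_cast<float>(i)) + 4.0f;
    }
}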