Maximum flow problem
In optimization theory, maximum flow problems involve finding a feasible flow through a flow network that obtains the maximum possible flow rate.
The maximum flow problem can be seen as a special case of more complex network flow problems, such as the circulation problem. The maximum value of an s-t flow is equal to the minimum capacity of an s-t cut in the network, as stated in the max-flow min-cut theorem.
History
The maximum flow problem was first formulated in 1954 by T. E. Harris and F. S. Ross as a simplified model of Soviet railway traffic flow. In 1955, Lester R. Ford, Jr. and Delbert R. Fulkerson created the first known algorithm, the Ford–Fulkerson algorithm. In their 1955 paper, Ford and Fulkerson wrote that the problem of Harris and Ross is formulated as follows:
Consider a rail network connecting two cities by way of a number of intermediate cities, where each link of the network has a number assigned to it representing its capacity. Assuming a steady state condition, find a maximal flow from one given city to the other.
In their 1962 book Flows in Networks, Ford and Fulkerson wrote:
It was posed to the authors in the spring of 1955 by T. E. Harris, who, in conjunction with General F. S. Ross, had formulated a simplified model of railway traffic flow, and pinpointed this particular problem as the central one suggested by the model.
The reference is to the 1955 secret report Fundamentals of a Method for Evaluating Rail Net Capacities by Harris and Ross.
Over the years, various improved solutions to the maximum flow problem were discovered, notably the shortest augmenting path algorithm of Edmonds and Karp and independently Dinitz; the blocking flow algorithm of Dinitz; the push-relabel algorithm of Goldberg and Tarjan; and the binary blocking flow algorithm of Goldberg and Rao. The algorithms of Sherman and Kelner, Lee, Orecchia and Sidford, respectively, find an approximately optimal maximum flow but only work in undirected graphs.
In 2013, James B. Orlin published a paper describing an $O(|V||E|)$-time algorithm.
In 2022, Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva published an almost-linear time algorithm, running in $O(|E|^{1+o(1)})$ time, for the minimum-cost flow problem, of which the maximum flow problem is a particular case. For the single-source shortest path problem with negative weights, another particular case of the minimum-cost flow problem, an algorithm in almost-linear time has also been reported. Both algorithms were deemed best papers at the 2022 Symposium on Foundations of Computer Science.
Definition
First we establish some notation:
- Let $N = (V, E)$ be a flow network with $s, t \in V$ being the source and the sink of $N$ respectively.
- If $f$ is a function on the edges of $N$, then its value on $(u, v) \in E$ is denoted by $f_{uv}$ or $f(u, v)$.
Definition. A flow is a map $f : E \to \mathbb{R}^{+}$ that satisfies the following:
- Capacity constraint. The flow of an edge cannot exceed its capacity, in other words: $f_{uv} \le c_{uv}$ for all $(u, v) \in E$.
- Conservation of flows. The sum of the flows entering a node must equal the sum of the flows exiting that node, except for the source and the sink. Or: $\sum_{u : (u, v) \in E} f_{uv} = \sum_{u : (v, u) \in E} f_{vu}$ for each $v \in V \setminus \{s, t\}$.
Definition. The value of a flow is the amount of flow passing from the source to the sink. Formally, for a flow $f$ it is given by: $|f| = \sum_{v : (s, v) \in E} f_{sv}$.
Definition. The maximum flow problem is to route as much flow as possible from the source to the sink; in other words, to find the flow with maximum value.
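The two constraints and the value $|f|$ above can be checked mechanically. The following is a minimal illustrative sketch in Python (the helper names `is_feasible_flow` and `flow_value` are assumptions for this example, not standard functions; edges are pairs and capacities/flows are dicts keyed by edge):

```python
from collections import defaultdict

def is_feasible_flow(edges, capacity, flow, s, t):
    """Check the capacity and conservation constraints for a flow.

    edges: list of (u, v) pairs; capacity, flow: dicts keyed by (u, v).
    """
    # Capacity constraint: 0 <= f_uv <= c_uv on every edge.
    if any(not (0 <= flow[e] <= capacity[e]) for e in edges):
        return False
    # Conservation: net flow is zero at every vertex except s and t.
    net = defaultdict(float)
    for (u, v) in edges:
        net[u] -= flow[(u, v)]
        net[v] += flow[(u, v)]
    return all(abs(net[v]) < 1e-9 for v in net if v not in (s, t))

def flow_value(edges, flow, s):
    """|f|: the net flow leaving the source s."""
    return sum(f for (u, _), f in flow.items() if u == s) - \
           sum(f for (_, v), f in flow.items() if v == s)
```

For example, on the diamond network $s \to a \to t$, $s \to b \to t$, a flow sending 2 units via $a$ and 1 via $b$ passes both checks and has value 3.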
Note that several maximum flows may exist, and if arbitrary real values of flow are permitted, there is either exactly one maximum flow, or infinitely many, since there are infinitely many linear combinations of the base maximum flows. In other words, if we send $x$ units of flow on edge $u$ in one maximum flow, and $y \ne x$ units of flow on $u$ in another maximum flow, then for each $\lambda \in [0, 1]$ we can send $\lambda x + (1 - \lambda) y$ units on $u$ and route the flow on remaining edges accordingly, to obtain another maximum flow. If flow values can be any real or rational numbers, then there are infinitely many such $\lambda$ values for each pair $x, y$.
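Because both the capacity and conservation constraints are linear, the blending step described above can be written in one line. A minimal sketch (illustrative Python; storing flows as edge-keyed dicts is an assumption made for this example):

```python
def blend_flows(f, g, lam):
    """Convex combination lam*f + (1-lam)*g of two flows, edge by edge.

    If f and g are feasible flows of equal value, the result is again a
    feasible flow of that same value for any 0 <= lam <= 1, since both
    constraint sets are linear.
    """
    return {e: lam * f[e] + (1 - lam) * g[e] for e in f}
```

For instance, blending a maximum flow routed entirely through one path with an equal-value flow routed through another yields a third, distinct maximum flow.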
Algorithms
The following tables show the historical development of algorithms for solving the maximum flow problem. Many of the listed publications include similar tables comparing their results to earlier works.
Strongly polynomial
A strongly polynomial time algorithm has a time bound that depends only on the numbers of vertices and edges, not on the magnitudes of the capacities. The complexity of each algorithm is stated using big O notation.
| Method | Year | Complexity | Description |
| Edmonds–Karp algorithm | 1969 | $O(VE^2)$ | A specialization of Ford–Fulkerson, finding augmenting paths with breadth-first search. |
| Dinic's algorithm | 1970 | $O(V^2 E)$ | Repeated phases that build a "layered" subgraph of residual graph edges belonging to shortest paths, using breadth-first search, and then find a blocking flow in $O(VE)$ time per phase. The shortest path length increases in each phase, so there are at most $V$ phases. |
| Karzanov's algorithm | 1974 | $O(V^3)$ | A predecessor to the push-relabel algorithm, using preflows to find a blocking flow in each phase of Dinic's algorithm in $O(V^2)$ time per phase. The first cubic-time flow algorithm. |
| Cherkassky's algorithm | 1977 | $O(V^2 E^{1/2})$ | Combines the blocking flow methods of Dinic and Karzanov. The first subcubic strongly polynomial time bound for sparse graphs. Remained best for some values of $E$ until KRT 1988. |
| Malhotra, Kumar, and Maheshwari | 1978 | $O(V^3)$ | Not an improvement in complexity over Karzanov, but a simplification. Finds blocking flows by repeatedly finding a "reference node" of the layered graph and a flow that saturates all its incoming or outgoing edges, in time proportional to the number of nodes plus the number of saturated edges. |
| Galil's algorithm | 1978 | $O(V^{5/3} E^{2/3})$ | Modifies Cherkassky's algorithm by replacing the method for finding flows within blocks of consecutive layers. |
| Galil and Naamad; Shiloach | 1978 | $O(VE \log^2 V)$ | Uses tree contraction on a breadth-first search forest of the layered graph to speed up blocking flows. The first of many $O(VE \log^{O(1)} V)$ algorithms, whose polynomial exponents are still the best known for a strongly polynomial algorithm. |
| Blocking flow with link/cut trees | 1981 | $O(VE \log V)$ | Introduces the link/cut tree data structure and uses it to find augmenting paths in layered networks in logarithmic amortized time per path. |
| Push–relabel algorithm with link/cut trees | 1986 | $O(VE \log(V^2/E))$ | The push-relabel algorithm maintains a preflow and a height function estimating residual distance to the sink. It modifies the preflow by pushing excess to lower-height vertices, and increases the height function at vertices without residual edges to lower heights, until all excess returns to the source. Link/cut trees allow pushes along paths rather than one edge at a time. |
| Cheriyan and Hagerup | 1989 | $O(VE + V^2 \log^2 V)$, randomized, with high probability | Push-relabel on a subgraph to which one edge is added at a time, prioritizing pushes of high excess amounts, with randomly permuted adjacency lists. |
| Alon | 1989 | $O(VE + V^{8/3} \log V)$ | Derandomization of Cheriyan and Hagerup. |
| Cheriyan, Hagerup, and Mehlhorn | 1990 | $O(V^3 / \log V)$ | Uses Alon's derandomization of Cheriyan and Hagerup with ideas related to the Method of Four Russians to speed up the search for height-reducing edges on which to push excess. |
| King, Rao, and Tarjan | 1992 | $O(VE + V^{2+\varepsilon})$ for any constant $\varepsilon > 0$ | Another derandomization of Cheriyan and Hagerup. Preliminary version of King, Rao, and Tarjan 1994 with weaker bounds. |
| Phillips and Westbrook | 1993 | $O(VE \log_{E/V} V + V^2 \log^{2+\varepsilon} V)$ for any constant $\varepsilon > 0$ | Improved from King, Rao, and Tarjan 1992 using similar ideas. |
| King, Rao, and Tarjan | 1994 | $O(VE \log_{E/(V \log V)} V)$ | Improved from Phillips and Westbrook using similar ideas. |
| Orlin | 2013 | $O(VE)$ | Applies a pseudopolynomial algorithm of Goldberg and Rao to a compressed network, maintained using data structures for dynamic transitive closure. Takes $O(VE + E^{31/16} \log^2 V)$ time, which simplifies to $O(VE)$ for $E = O(V^{16/15 - \epsilon})$, while previous bounds simplify to $O(VE)$ for $E = \Omega(V^{1+\epsilon})$. |
| Orlin and Gong | 2021 | | Based on a pseudopolynomial algorithm of Ahuja, Orlin, and Tarjan. Faster than King, Rao, and Tarjan and does not use link/cut trees, but not faster than Orlin + KRT. |
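For concreteness, the Edmonds–Karp entry above can be sketched as a short program: Ford–Fulkerson with breadth-first search for shortest augmenting paths, which yields the $O(VE^2)$ bound. The following illustrative Python sketch (not taken from any of the cited papers) uses a dict-of-dicts capacity representation chosen for brevity rather than efficiency:

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum s-t flow via shortest augmenting paths (Edmonds-Karp).

    capacity: dict of dicts, capacity[u][v] = capacity of edge (u, v).
    Returns the maximum flow value; runs in O(V * E^2) time.
    """
    # Residual capacities start equal to the originals; every edge also
    # gets a reverse residual entry initialized to zero.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    max_flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return max_flow  # no augmenting path left
        # Recover the path and its bottleneck residual capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Augment: decrease forward residuals, increase reverse ones.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        max_flow += bottleneck
```

On the network $s \to \{a, b\}$, $a \to \{b, t\}$, $b \to t$ with capacities 3, 2, 1, 2, 3, the algorithm finds the maximum flow value 5.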
Pseudo-polynomial and weakly polynomial
In parallel to the development of strongly polynomial flow algorithms, there has been a long line of pseudo-polynomial and weakly polynomial time bounds, whose running time depends on the magnitude of the input capacities. Here, the value $U$ refers to the largest edge capacity after rescaling all capacities to integer values. The difference between pseudo-polynomial and weakly polynomial is that a pseudo-polynomial bound may be polynomial in $U$, but a weakly polynomial bound can be polynomial only in $\log U$.
| Method | Year | Complexity | Description |
| Ford–Fulkerson algorithm | 1956 | $O(E|f|)$, where $|f|$ is the maximum flow value | As long as there is an open path through the residual graph, send the minimum of the residual capacities on that path. |
| Binary blocking flow algorithm | 1998 | $O(E \min(V^{2/3}, E^{1/2}) \log(V^2/E) \log U)$ | Assigns binary lengths to edges according to whether their residual capacity is large, and computes blocking flows with respect to these lengths. |
| Kathuria-Liu-Sidford algorithm | 2020 | $E^{4/3+o(1)} U^{1/3}$ | Interior point methods and edge boosting using $\ell_p$-norm flows. Builds on the earlier algorithm of Madry, which achieved an $\tilde{O}(E^{10/7} U^{1/7})$ runtime. |
| BLNPSSSW / BLLSSSW algorithm | 2020 | $\tilde{O}((E + V^{3/2}) \log U)$ | Interior point methods and dynamic maintenance of electric flows with expander decompositions. |
| Gao-Liu-Peng algorithm | 2021 | $\tilde{O}(E^{3/2 - 1/328} \log U)$ | Gao, Liu, and Peng's algorithm revolves around dynamically maintaining the augmenting electrical flows at the core of earlier interior point method based algorithms. This entails designing data structures that, in limited settings, return edges with large electric energy in a graph undergoing resistance updates. |
| Chen, Kyng, Liu, Peng, Gutenberg and Sachdeva's algorithm | 2022 | The exact complexity is not stated clearly in the paper, but appears to be $E^{1+o(1)} \log U$ | Chen, Kyng, Liu, Peng, Gutenberg and Sachdeva's algorithm solves maximum flow and minimum-cost flow in almost linear time by building the flow through a sequence of approximate undirected minimum-ratio cycles, each of which is computed and processed in amortized $E^{o(1)}$ time using a dynamic data structure. |
| Bernstein, Blikstad, Saranurak, Tu | 2024 | $V^{2+o(1)}$, randomized, when the edge capacities come from the set $\{1, \dots, U\}$ | The algorithm is a variant of the push-relabel algorithm, introducing a weighted variant. The paper establishes a weight function on directed acyclic graphs, and attempts to imitate it on general graphs using directed expander hierarchies, which induce a natural vertex ordering that produces a weight function similar to that of the DAG special case. The randomization comes from the difficulty of applying directed expander hierarchies to the computation of sparse cuts, which do not allow for natural dynamic updating. |
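To see why such bounds can depend on $U$, consider plain Ford–Fulkerson with depth-first search: with integer capacities, each augmentation adds at least one unit of flow, so the number of iterations is bounded only by the maximum flow value, which can grow with $U$. A minimal illustrative sketch in Python (not the papers' own code; capacities stored as a dict of dicts, an assumption made for this example):

```python
def ford_fulkerson(capacity, s, t):
    """Max flow by DFS augmenting paths.

    With integer capacities each augmentation pushes >= 1 unit, so the
    number of iterations is bounded by the max flow value |f| -- a
    pseudo-polynomial bound, since |f| can be as large as about V * U
    for largest capacity U.
    """
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)

    def dfs(u, limit, seen):
        """Push up to `limit` units along some residual path u -> t."""
        if u == t:
            return limit
        seen.add(u)
        for v, cap in residual[u].items():
            if cap > 0 and v not in seen:
                pushed = dfs(v, min(limit, cap), seen)
                if pushed:
                    residual[u][v] -= pushed
                    residual[v][u] += pushed
                    return pushed
        return 0

    total = 0
    while (pushed := dfs(s, float("inf"), set())):
        total += pushed
    return total
```

On networks with large capacities, a poor choice of augmenting paths can force a number of iterations proportional to the capacities themselves, which is exactly the behavior the weakly polynomial algorithms in the table above avoid.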