Floyd–Warshall algorithm

Source: Wikipedia, the free encyclopedia.
Class: All-pairs shortest path problem (for weighted graphs)
Data structure: Graph
Worst-case performance: Θ(|V|³)
Best-case performance: Θ(|V|³)
Average performance: Θ(|V|³)
Worst-case space complexity: Θ(|V|²)

In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles).[1][2] A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation, or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph.

History and naming

The Floyd–Warshall algorithm is an example of dynamic programming, and was published in its currently recognized form by Robert Floyd in 1962.[3] However, it is essentially the same as algorithms previously published by Bernard Roy in 1959[4] and also by Stephen Warshall in 1962[5] for finding the transitive closure of a graph,[6] and is closely related to Kleene's algorithm (published in 1956) for converting a deterministic finite automaton into a regular expression.[7] The modern formulation of the algorithm as three nested for-loops was first described by Peter Ingerman, also in 1962.[8]

Algorithm

The Floyd–Warshall algorithm compares many possible paths through the graph between each pair of vertices. It is guaranteed to find all shortest paths and is able to do this with Θ(|V|³) comparisons in a graph, even though there may be up to |V|² edges in the graph. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is optimal.

Consider a graph G with vertices V numbered 1 through N. Further consider a function shortestPath(i, j, k) that returns the length of the shortest possible path (if one exists) from i to j using vertices only from the set {1, 2, …, k} as intermediate points along the way. Now, given this function, our goal is to find the length of the shortest path from each i to each j using any vertex in {1, 2, …, N}. By definition, this is the value shortestPath(i, j, N), which we will find recursively.

Observe that shortestPath(i, j, k) must be less than or equal to shortestPath(i, j, k − 1): we have more flexibility if we are allowed to use the vertex k. If shortestPath(i, j, k) is in fact less than shortestPath(i, j, k − 1), then there must be a path from i to j using the vertices {1, …, k} that is shorter than any such path that does not use the vertex k. Since there are no negative cycles, this path can be decomposed as:

(1) a path from i to k that uses the vertices {1, …, k − 1}, followed by
(2) a path from k to j that uses the vertices {1, …, k − 1}.

And of course, these must be the shortest such paths; otherwise we could further decrease the length. In other words, we have arrived at the recursive formula:

shortestPath(i, j, k) = min(shortestPath(i, j, k − 1), shortestPath(i, k, k − 1) + shortestPath(k, j, k − 1)).

Meanwhile, the base case is given by

shortestPath(i, j, 0) = w(i, j),

where w(i, j) denotes the weight of the edge from i to j if one exists and ∞ (infinity) otherwise.
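
The recurrence and its base case can be written down almost verbatim as memoized code. The following is a minimal Python sketch, not part of the article; the function name shortest_path and the dictionary w of edge weights are illustrative assumptions, and the sample weights are those of the example graph used later in this article.

from functools import lru_cache

INF = float("inf")

# w[(i, j)]: weight of the directed edge i -> j; a missing key means "no edge".
w = {(1, 3): -2, (2, 1): 4, (2, 3): 3, (3, 4): 2, (4, 2): -1}
N = 4

@lru_cache(maxsize=None)
def shortest_path(i, j, k):
    # Length of the shortest path from i to j whose intermediate vertices
    # all come from the set {1, ..., k}.
    if k == 0:
        # Base case; the i == j clause mirrors the dist[v][v] <- 0
        # initialization in the pseudocode below.
        return 0 if i == j else w.get((i, j), INF)
    # Either vertex k is not used at all, or the path splits at k.
    return min(shortest_path(i, j, k - 1),
               shortest_path(i, k, k - 1) + shortest_path(k, j, k - 1))

# shortest_path(4, 3, N) evaluates to 1, the length of the path 4 -> 2 -> 1 -> 3.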

These formulas are the heart of the Floyd–Warshall algorithm. The algorithm works by first computing shortestPath(i, j, k) for all (i, j) pairs for k = 1, then for k = 2, then for k = 3, and so on. This process continues until k = N, and we have found the shortest path for all (i, j) pairs using any intermediate vertices. Pseudocode for this basic version follows.

Pseudocode

let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
for each edge (u, v) do
    dist[u][v] ← w(u, v)  // The weight of the edge (u, v)
for each vertex v do
    dist[v][v] ← 0
for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if dist[i][j] > dist[i][k] + dist[k][j] 
                dist[i][j] ← dist[i][k] + dist[k][j]
            end if
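
A direct translation of this pseudocode into runnable code can be useful for experimentation. The following is a minimal Python sketch, not part of the original article; the function name floyd_warshall, the 0-based vertex labels, and the representation of the graph as a list of (u, v, weight) triples are assumptions made for illustration.

INF = float("inf")

def floyd_warshall(n, edges):
    # n: number of vertices, labeled 0 .. n-1 (the pseudocode above uses 1 .. |V|)
    # edges: iterable of (u, v, weight) triples describing directed edges u -> v
    # Returns the n x n matrix of shortest-path distances.
    dist = [[INF] * n for _ in range(n)]
    for u, v, weight in edges:
        dist[u][v] = weight              # the weight of the edge (u, v)
    for v in range(n):
        dist[v][v] = 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][j] > dist[i][k] + dist[k][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

Because Python's float("inf") absorbs additions, no special handling is needed when one of the two summands means "no path yet"; a fixed-width integer implementation would have to guard against overflow instead.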

Example

The algorithm above is executed on the following example graph: a directed graph on four vertices, with edges 1 → 3 (weight −2), 2 → 1 (weight 4), 2 → 3 (weight 3), 3 → 4 (weight 2), and 4 → 2 (weight −1).

Prior to the first recursion of the outer loop, shown as k = 0 below, the only known paths correspond to the single edges in the graph. At k = 1, paths that go through the vertex 1 are found: in particular, the path [2,1,3] is found, replacing the path [2,3] which has fewer edges but is longer (in terms of weight). At k = 2, paths going through the vertices {1,2} are found. The path [4,2,1,3] is assembled from the two known paths [4,2] and [2,1,3] encountered in previous iterations, with 2 in the intersection. The path [4,2,3] is not considered, because [2,1,3] is the shortest path encountered so far from 2 to 3. At k = 3, paths going through the vertices {1,2,3} are found. Finally, at k = 4, all shortest paths are found.

The distance matrix at each iteration of k, with the updated distances marked by an asterisk (*), will be:

k = 0          j = 1   j = 2   j = 3   j = 4
     i = 1       0       ∞      −2       ∞
     i = 2       4       0       3       ∞
     i = 3       ∞       ∞       0       2
     i = 4       ∞      −1       ∞       0

k = 1          j = 1   j = 2   j = 3   j = 4
     i = 1       0       ∞      −2       ∞
     i = 2       4       0       2*      ∞
     i = 3       ∞       ∞       0       2
     i = 4       ∞      −1       ∞       0

k = 2          j = 1   j = 2   j = 3   j = 4
     i = 1       0       ∞      −2       ∞
     i = 2       4       0       2       ∞
     i = 3       ∞       ∞       0       2
     i = 4       3*     −1       1*      0

k = 3          j = 1   j = 2   j = 3   j = 4
     i = 1       0       ∞      −2       0*
     i = 2       4       0       2       4*
     i = 3       ∞       ∞       0       2
     i = 4       3      −1       1       0

k = 4          j = 1   j = 2   j = 3   j = 4
     i = 1       0      −1*     −2       0
     i = 2       4       0       2       4
     i = 3       5*      1*      0       2
     i = 4       3      −1       1       0
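
For reference, the final k = 4 matrix can be reproduced with the floyd_warshall sketch given under Pseudocode above (a hypothetical helper, with the vertex numbers shifted to 0-based indices):

edges = [(0, 2, -2),   # 1 -> 3, weight -2
         (1, 0, 4),    # 2 -> 1, weight 4
         (1, 2, 3),    # 2 -> 3, weight 3
         (2, 3, 2),    # 3 -> 4, weight 2
         (3, 1, -1)]   # 4 -> 2, weight -1
for row in floyd_warshall(4, edges):
    print(row)
# Expected output, matching the k = 4 table (rows i = 1..4, columns j = 1..4):
# [0, -1, -2, 0]
# [4, 0, 2, 4]
# [5, 1, 0, 2]
# [3, -1, 1, 0]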

Behavior with negative cycles

A negative cycle is a cycle whose edges sum to a negative value. There is no shortest path between any pair of vertices i, j that form part of a negative cycle, because path lengths from i to j can be arbitrarily small (negative). For numerically meaningful output, the Floyd–Warshall algorithm assumes that there are no negative cycles. Nevertheless, if there are negative cycles, the Floyd–Warshall algorithm can be used to detect them. The intuition is as follows:

  • The Floyd–Warshall algorithm iteratively revises path lengths between all pairs of vertices (i, j), including where i = j;
  • Initially, the length of the path (i, i) is zero;
  • A path [i, k, …, i] can only improve upon this if it has length less than zero, i.e. it denotes a negative cycle;
  • Thus, after the algorithm, dist[i][i] will be negative if there exists a negative-length path from i back to i.

Hence, to detect negative cycles using the Floyd–Warshall algorithm, one can inspect the diagonal of the path matrix, and the presence of a negative number indicates that the graph contains at least one negative cycle.[9] During the execution of the algorithm, if there is a negative cycle, exponentially large numbers can appear, as large as Ω(6^(n−1)·w_max), where w_max is the largest absolute value of a negative edge in the graph. To avoid overflow/underflow problems one should check for negative numbers on the diagonal of the path matrix within the inner for loop of the algorithm.[10] Obviously, in an undirected graph a negative edge creates a negative cycle (i.e., a closed walk) involving its incident vertices. For example, if all edges of the example graph above are treated as undirected, the vertex sequence 4 – 2 – 4 is a cycle with weight sum −2.
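
As a sketch of such a check (under the same illustrative assumptions as the earlier Python snippets; the function name floyd_warshall_detect is hypothetical), the diagonal can be inspected as the algorithm runs:

INF = float("inf")

def floyd_warshall_detect(n, edges):
    # Variant of the basic algorithm that reports a negative cycle as soon as
    # a negative entry appears on the diagonal of the distance matrix.
    dist = [[INF] * n for _ in range(n)]
    for u, v, weight in edges:
        dist[u][v] = weight
    for v in range(n):
        dist[v][v] = 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][j] > dist[i][k] + dist[k][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
            if dist[i][i] < 0:
                return dist, True        # negative cycle through vertex i detected
    return dist, False

Python integers do not overflow, so the early exit is not strictly needed here; in a fixed-width integer implementation it is what prevents the exponentially large intermediate values mentioned above from causing overflow.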

Path reconstruction

The Floyd–Warshall algorithm typically only provides the lengths of the paths between all pairs of vertices. With simple modifications, it is possible to create a method to reconstruct the actual path between any two endpoint vertices. While one may be inclined to store the actual path from each vertex to each other vertex, this is not necessary, and in fact, is very costly in terms of memory. Instead, the shortest-path tree can be calculated for each node in Θ(|E|) time using Θ(|V|) memory to store each tree, which allows us to efficiently reconstruct a path between any two connected vertices.

Pseudocode

Source:[11]

let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
let prev be a |V| × |V| array of vertex indices initialized to null

procedure FloydWarshallWithPathReconstruction() is
    for each edge (u, v) do
        dist[u][v] ← w(u, v)  // The weight of the edge (u, v)
        prev[u][v] ← u
    for each vertex v do
        dist[v][v] ← 0
        prev[v][v] ← v
    for k from 1 to |V| do // standard Floyd–Warshall implementation
        for i from 1 to |V|
            for j from 1 to |V|
                if dist[i][j] > dist[i][k] + dist[k][j] then
                    dist[i][j] ← dist[i][k] + dist[k][j]
                    prev[i][j] ← prev[k][j]
procedure Path(u, v)
    if prev[u][v] = null then
        return []
    path ← [v]
    while u ≠ v do
        v ← prev[u][v]
        path.prepend(v)
    return path
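
A Python rendering of this variant might look as follows. It is a sketch under the same assumptions as the earlier snippets (0-based vertex labels, edges given as (u, v, weight) triples); the names floyd_warshall_with_paths and path are illustrative, not the article's.

INF = float("inf")

def floyd_warshall_with_paths(n, edges):
    dist = [[INF] * n for _ in range(n)]
    prev = [[None] * n for _ in range(n)]   # prev[u][v]: predecessor of v on a shortest u -> v path
    for u, v, weight in edges:
        dist[u][v] = weight
        prev[u][v] = u
    for v in range(n):
        dist[v][v] = 0
        prev[v][v] = v
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][j] > dist[i][k] + dist[k][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    prev[i][j] = prev[k][j]
    return dist, prev

def path(prev, u, v):
    # Reconstructs the vertex sequence of a shortest path from u to v,
    # or returns [] if no such path exists.
    if prev[u][v] is None:
        return []
    result = [v]
    while u != v:
        v = prev[u][v]
        result.append(v)
    return list(reversed(result))

On the example graph above (0-based), path(prev, 3, 2) returns [3, 1, 0, 2], i.e. the path 4 → 2 → 1 → 3 from the walkthrough.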

Time analysis

Let n be |V|, the number of vertices. To find all n² values of shortestPath(i, j, k) (for all i and j) from those of shortestPath(i, j, k − 1) requires 2n² operations. Since we begin with shortestPath(i, j, 0) = w(i, j) and compute the sequence of n matrices shortestPath(i, j, 1), shortestPath(i, j, 2), …, shortestPath(i, j, n), the total number of operations used is n · 2n² = 2n³. Therefore, the complexity of the algorithm is Θ(n³).

Applications and generalizations

The Floyd–Warshall algorithm can be used to solve the following problems, among others:

  • shortest paths in directed graphs (Floyd's algorithm);
  • transitive closure of directed graphs or of a relation (Warshall's algorithm);
  • finding a regular expression denoting the regular language accepted by a finite automaton (Kleene's algorithm);
  • widest paths between all pairs of vertices in a weighted graph, for example in connection with the Schulze voting system.

Implementations

Implementations are available for many programming languages.

Comparison with other shortest path algorithms

For graphs with non-negative edge weights, Dijkstra's algorithm can be used to find all shortest paths from a single vertex with running time Θ(|E| + |V| log |V|). Thus, running Dijkstra starting at each vertex takes time Θ(|E||V| + |V|² log |V|). Since |E| = O(|V|²), this yields a worst-case running time of repeated Dijkstra of O(|V|³). While this matches the asymptotic worst-case running time of the Floyd–Warshall algorithm, the constants involved matter quite a lot. When a graph is dense (i.e., |E| ≈ |V|²), the Floyd–Warshall algorithm tends to perform better in practice. When the graph is sparse (i.e., |E| is significantly smaller than |V|²), Dijkstra tends to dominate.

For sparse graphs with negative edges but no negative cycles, Johnson's algorithm can be used, with the same asymptotic running time as the repeated Dijkstra approach.

There are also known algorithms using fast matrix multiplication to speed up all-pairs shortest path computation in dense graphs, but these typically make extra assumptions on the edge weights (such as requiring them to be small integers).[15][16] In addition, because of the high constant factors in their running time, they would only provide a speedup over the Floyd–Warshall algorithm for very large graphs.

References
