Criss-cross algorithm

Source: Wikipedia, the free encyclopedia.

[Figure: a three-dimensional cube. The criss-cross algorithm visits all 8 corners of the Klee–Minty cube in the worst case, and 3 additional corners on average; the Klee–Minty cube is a perturbation of the cube shown here.]

In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems,[1][2] quadratic-programming problems, and linear complementarity problems.[3]

Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee–Minty cube (after Victor Klee and George J. Minty), in the worst case.[4][5] However, when it is started at a random corner, the criss-cross algorithm on average visits only D additional corners.[6][7][8] Thus, for the three-dimensional cube, the algorithm visits all 8 corners in the worst case and exactly 3 additional corners on average.

History

The criss-cross algorithm was published independently by Tamás Terlaky[9] and by Zhe-Min Wang;[10] related algorithms appeared in unpublished reports by other authors.[3]

Comparison with the simplex algorithm for linear optimization

In its second phase, the simplex algorithm crawls along the edges of the polytope until it finally reaches an optimum vertex. The criss-cross algorithm considers bases that are not associated with vertices, so that some iterates can be in the interior of the feasible region, like interior-point algorithms; the criss-cross algorithm can also have infeasible iterates outside the feasible region.

In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm. The simplex algorithm first finds a (primal-) feasible basis by solving a "phase-one problem"; in "phase two", the simplex algorithm pivots between a sequence of basic feasible solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution (also finally finding a "dual feasible" solution).[3][11]
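To make the contrast concrete, the following sketch builds the auxiliary "phase-one" problem that the two-phase simplex method must solve first; the function name and the NumPy representation are illustrative assumptions, not code from the cited sources. The criss-cross algorithm skips this construction entirely and pivots directly on a possibly infeasible basis.

```python
import numpy as np

def phase_one_problem(A, b):
    """Auxiliary LP for "phase one" of the two-phase simplex method:
        min 1.s   s.t.   A x + s = b,   x >= 0,  s >= 0,
    with rows sign-corrected so that b >= 0.  The artificial variables s
    give an immediately feasible starting basis; the original problem is
    feasible exactly when this auxiliary LP has optimal value zero.
    """
    m, n = A.shape
    sgn = np.where(b < 0, -1.0, 1.0)         # flip rows with a negative right-hand side
    A1 = np.column_stack([sgn[:, None] * A, np.eye(m)])
    b1 = sgn * b
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    basis = list(range(n, n + m))            # the artificial variables are basic
    return A1, b1, c1, basis
```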

The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm only has one phase. Its pivoting rules are similar to the least-index pivoting rule of Bland.[12][13] Bland's rule uses only the signs of coefficients, rather than their (real-number) ordering, while deciding eligible pivots. The criss-cross algorithm has been applied to furnish constructive proofs of basic results in real linear algebra, such as the lemma of Farkas.[14]

While most simplex variants are monotonic in the objective function (strictly so in the non-degenerate case), most variants of the criss-cross algorithm lack a monotone merit function, which can be a disadvantage in practice.

Description

The criss-cross algorithm works on a standard pivot tableau (or on parts of a tableau calculated on the fly, if implemented like the revised simplex method). In a general step, if the tableau is primal or dual infeasible, the algorithm selects one of the infeasible rows or columns as the pivot row or column, using an index selection rule. An important property is that the selection is made over the union of the infeasible indices, and the standard version of the algorithm does not distinguish column indices from row indices (that is, rows are identified by the indices of the variables that are basic in them). If a row is selected, then the algorithm uses the index selection rule to identify a column position for a dual-type pivot; if a column is selected, then it uses the index selection rule to find a row position and carries out a primal-type pivot. A sketch of the least-index version follows.
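The following Python sketch illustrates the least-index criss-cross rule on a dense simplex tableau for a problem in the standard form min c·x subject to Ax = b, x ≥ 0. It is a minimal illustration under stated assumptions (a NumPy tableau, a nonsingular starting basis, a fixed numerical tolerance); the function name and data layout are not taken from the cited literature.

```python
import numpy as np

def criss_cross(A, b, c, basis, tol=1e-9):
    """Least-index criss-cross sketch for:  min c.x  s.t.  A x = b, x >= 0.

    `basis` may be any starting basis with nonsingular columns; it need
    not be primal or dual feasible.  Returns (status, x).
    """
    m, n = A.shape
    basis = list(basis)
    # Full tableau with respect to the basis; the last column holds the
    # basic values, and z holds the reduced costs.
    T = np.linalg.solve(A[:, basis], np.column_stack([A, b]))
    z = np.concatenate([c, [0.0]]) - c[basis] @ T
    while True:
        row_of = {v: i for i, v in enumerate(basis)}
        # Infeasible indices: basic variables with a negative value (primal)
        # and nonbasic variables with a negative reduced cost (dual).
        bad = [v for v in range(n)
               if (v in row_of and T[row_of[v], -1] < -tol)
               or (v not in row_of and z[v] < -tol)]
        if not bad:
            x = np.zeros(n)
            x[basis] = T[:, -1]
            return "optimal", x
        k = min(bad)                              # least-index selection rule
        if k in row_of:
            # Dual-type pivot: in the row of k, take the least-index
            # nonbasic column with a negative entry.
            r = row_of[k]
            cols = [v for v in range(n) if v not in row_of and T[r, v] < -tol]
            if not cols:
                return "primal infeasible", None  # no eligible pivot in the row
            s = min(cols)
        else:
            # Primal-type pivot: in column k, take the row whose basic
            # variable has the least index among rows with a positive entry.
            s = k
            rows = [v for v in basis if T[row_of[v], s] > tol]
            if not rows:
                return "dual infeasible", None    # primal unbounded if feasible
            r = row_of[min(rows)]
        # Gauss-Jordan pivot on (r, s): variable s enters the basis in row r.
        T[r] /= T[r, s]
        for i in range(m):
            if i != r:
                T[i] -= T[i, s] * T[r]
        z -= z[s] * T[r]
        basis[r] = s
```

With exact arithmetic, the least-index rule guarantees finite termination; the floating-point tolerance above is only a practical stand-in for exact sign tests.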

Computational complexity: Worst and average cases

The time complexity of an algorithm counts the number of arithmetic operations sufficient for the algorithm to solve the problem. For example, Gaussian elimination requires on the order of D^3 operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a cubic polynomial. There are examples of algorithms that do not have polynomial-time complexity. For example, a generalization of Gaussian elimination called Buchberger's algorithm has for its complexity an exponential function of the problem data (the degree of the polynomials and the number of variables of the multivariate polynomials). Because exponential functions eventually grow much faster than polynomial functions, an exponential complexity implies that an algorithm has slow performance on large problems.

Several algorithms for linear programming, among them Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and central-path algorithms, have polynomial time-complexity (in the worst case, and thus on average). The ellipsoidal and projective algorithms were published before the criss-cross algorithm.

However, like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Terlaky's criss-cross algorithm visits all 2^D corners of a (perturbed) cube in dimension D, according to a paper of Roos; Roos's paper modifies the Klee–Minty construction of a cube on which the simplex algorithm takes 2^D steps.[3][4][5] Like the simplex algorithm, the criss-cross algorithm visits all 8 corners of the three-dimensional cube in the worst case.

However, when it is initialized at a random corner of the cube, the criss-cross algorithm visits only D additional corners on average, according to a 1994 paper by Fukuda and Namiki.[6][7] Trivially, the simplex algorithm takes on average D steps for a cube.[8][15] Like the simplex algorithm, the criss-cross algorithm visits exactly 3 additional corners of the three-dimensional cube on average.
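To make the worst-case family concrete, the sketch below builds the data of one common textbook formulation of the (unperturbed) Klee–Minty cube in dimension D; the exact formulation varies across the literature, and the helper name is an assumption for illustration.

```python
import numpy as np

def klee_minty(D):
    """One standard Klee-Minty cube in dimension D:
        max  sum_j 2**(D-j) * x_j
        s.t. 2 * sum_{j<i} 2**(i-j) * x_j + x_i <= 5**i   (i = 1..D),
             x >= 0.
    Returns (A, b, c) with the constraints written as A x <= b.
    """
    A = np.zeros((D, D))
    for i in range(D):                    # 0-based row i is constraint i+1
        for j in range(i):
            A[i, j] = 2.0 ** (i - j + 1)  # coefficient 2 * 2**((i+1)-(j+1))
        A[i, i] = 1.0
    b = np.array([5.0 ** (i + 1) for i in range(D)])
    c = np.array([2.0 ** (D - j - 1) for j in range(D)])
    return A, b, c
```

Started at the origin with the classical pivot rule, the simplex method traverses all 2^D vertices of this cube; Roos's construction perturbs such a cube so that the criss-cross algorithm behaves analogously.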

Variants

The criss-cross algorithm has been extended to solve more general problems than linear programming problems.

Other optimization problems with linear constraints

There are variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem with "sufficient matrices";[3][16][17][18][19][20] conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix.[18][19] A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive.[18][19][20] The criss-cross algorithm has been adapted also for linear-fractional programming.[1][2]

Vertex enumeration

The criss-cross algorithm was used in an algorithm for enumerating all the vertices of a polytope, which was published by David Avis and Komei Fukuda in 1992.[21] Avis and Fukuda presented an algorithm which finds the v vertices of a polyhedron defined by a nondegenerate system of n linear inequalities in D dimensions (or, dually, the v facets of the convex hull of n points in D dimensions, where each facet contains exactly D given points) in time O(nDv) and O(nD) space.[22]

Oriented matroids

The max-flow min-cut theorem states that the maximum flow through a network is exactly the capacity of its minimum cut. This theorem can be proved using the criss-cross algorithm for oriented matroids.

The criss-cross algorithm is often studied using the theory of oriented matroids (OMs), which is a combinatorial abstraction of linear-optimization theory.[17][23] Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. However, Bland's rule exhibits cycling on some oriented-matroid linear-programming problems.[17] The first purely combinatorial algorithm for linear programming was devised by Michael J. Todd.[17][24] Todd's algorithm was developed not only for linear-programming in the setting of oriented matroids, but also for quadratic-programming problems and linear-complementarity problems.[17][24] Todd's algorithm is complicated even to state, unfortunately, and its finite-convergence proofs are somewhat complicated.[17]

The criss-cross algorithm and its proof of finite termination can be simply stated, and they readily extend to the setting of oriented matroids. The algorithm can be further simplified for linear feasibility problems, that is, for linear systems with nonnegative variables; these problems can be formulated for oriented matroids (see the sketch below).[14] The criss-cross algorithm has been adapted for problems that are more complicated than linear programming: there are oriented-matroid variants also for the quadratic-programming problem and for the linear-complementarity problem.[3][16][17]
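As a usage note on the criss_cross sketch from the Description section (again an illustration, not code from the cited sources): for a linear feasibility problem Ax = b, x ≥ 0, a zero objective makes every index dual feasible, so the algorithm performs only dual-type pivots and stops at the first primal-feasible basis.

```python
import numpy as np

def feasible_point(A, b, basis):
    """Linear feasibility via the criss-cross sketch above: return x >= 0
    with A x = b, or None if the system is primal infeasible.
    (`criss_cross` is the illustrative routine from the Description section.)
    """
    c = np.zeros(A.shape[1])   # zero objective: pivots serve feasibility only
    status, x = criss_cross(A, b, c, basis)
    return x if status == "optimal" else None
```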

Summary

The criss-cross algorithm is a simply stated algorithm for linear programming. It was the second fully combinatorial algorithm for linear programming. The partially combinatorial simplex algorithm of Bland cycles on some (nonrealizable) oriented matroids. The first fully combinatorial algorithm was published by Todd, and it is also like the simplex algorithm in that it preserves feasibility after the first feasible basis is generated; however, Todd's rule is complicated. The criss-cross algorithm is not a simplex-like algorithm, because it need not maintain feasibility. The criss-cross algorithm does not have polynomial time-complexity, however.

Researchers have extended the criss-cross algorithm to many optimization problems, including linear-fractional programming. The criss-cross algorithm can solve quadratic programming problems and linear complementarity problems, even in the setting of oriented matroids. Even when generalized, the criss-cross algorithm remains simply stated.

See also

  • Jack Edmonds (pioneer of combinatorial optimization and oriented-matroid theorist; doctoral advisor of Komei Fukuda)

Notes

  1. ^ a b Illés, Szirmai & Terlaky (1999)
  2. ^
    S2CID 62199788
    .
  3. ^ a b c d e f g Fukuda & Terlaky (1997)
  4. ^ a b Roos (1990)
  5. ^ Klee & Minty (1972)
  6. ^ a b c Fukuda & Terlaky (1997, p. 385)
  7. ^ a b Fukuda & Namiki (1994, p. 367)
  8. ^ .
  9. ^ Terlaky (1985) and Terlaky (1987)
  10. ^ Wang (1987)
  11. ^ a b Terlaky & Zhang (1993)
  12. ^ a b Bland, Robert G. (May 1977). "New finite pivoting rules for the simplex method". Mathematics of Operations Research. 2 (2): 103–107. .
  13. ^ Bland's rule is also related to an earlier least-index rule, which was proposed by Katta G. Murty for the linear complementarity problem, according to Fukuda & Namiki (1994).
  14. ^ a b Klafszky & Terlaky (1991)
  15. ^ More generally, for the simplex algorithm, the expected number of steps is proportional to D for linear-programming problems that are randomly drawn from the Euclidean unit sphere, as proved by Borgwardt and by Smale.
  16. ^ a b Fukuda & Namiki (1994)
  17. ^ .
  18. ^ .
  19. ^
    S2CID 24418835. Archived from the original
    (pdf) on 23 September 2015. Retrieved 30 August 2011.
  20. ^ .
  21. ^ Avis & Fukuda (1992, p. 297)
  22. ^ The v vertices in a simple arrangement of n hyperplanes in D dimensions can be found in O(n²Dv) time and O(nD) space complexity.
  23. ^ The theory of oriented matroids was initiated by R. Tyrrell Rockafellar (Rockafellar 1969):

    Rockafellar, R. T. (1969). "The elementary vectors of a subspace of R^N (1967)" (PDF). In R. C. Bose; T. A. Dowling (eds.). Combinatorial Mathematics and its Applications. Chapel Hill: University of North Carolina Press. MR 0278972. PDF reprint.

    Rockafellar was influenced by the earlier studies of Albert W. Tucker and George J. Minty. Tucker and Minty had studied the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm.

  24. ^ Todd (1985)
