Dantzig–Wolfe decomposition explained

Dantzig–Wolfe decomposition is an algorithm for solving linear programming problems with special structure. It was developed by George Dantzig and Philip Wolfe and first published in 1960.[1] Many texts on linear programming devote sections to this decomposition algorithm.[2][3][4][5][6][7]

Dantzig–Wolfe decomposition relies on delayed column generation to improve the tractability of large-scale linear programs. In most linear programs solved via the revised simplex algorithm, at each step most columns (variables) are not in the basis. In such a scheme, a master problem containing at least the currently active columns (the basis) uses one or more subproblems to generate columns whose entry into the basis improves the objective function.

Required form

To use Dantzig–Wolfe decomposition, the constraint matrix of the linear program must have a specific form. A set of constraints must be identified as "connecting", "coupling", or "complicating" constraints, in which many of the variables have non-zero coefficients. The remaining constraints must be grouped into independent submatrices such that a variable with a non-zero coefficient in one submatrix has no non-zero coefficient in any other submatrix. This structure is described below:

The D matrix represents the coupling constraints and each Fi represents the independent submatrices. Note that it is possible to run the algorithm when there is only one F submatrix.
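In symbols, the required block-angular form can be sketched as follows (the right-hand sides b_0, b_i and cost vectors c_i are assumed names, since the text above does not name them explicitly):

```latex
\begin{align}
\min \quad & c_1^\top x_1 + c_2^\top x_2 + \cdots + c_n^\top x_n \\
\text{s.t.} \quad & D_1 x_1 + D_2 x_2 + \cdots + D_n x_n = b_0 && \text{(coupling constraints)} \\
& F_i x_i = b_i, \quad x_i \ge 0, \qquad i = 1, \dots, n && \text{(independent blocks)}
\end{align}
```

Here each D_i is the slice of the coupling matrix D acting on the variables x_i of block i, and each F_i touches only its own block's variables.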

Problem reformulation

After identifying the required form, the original problem is reformulated into a master program and n subprograms. This reformulation relies on the fact that every point of a non-empty, bounded convex polyhedron can be represented as a convex combination of its extreme points.

Each column in the new master program represents a solution to one of the subproblems. The master program enforces that the coupling constraints are satisfied given the set of subproblem solutions that are currently available. The master program then requests additional solutions from the subproblems such that the overall objective of the original linear program is improved.
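Concretely, writing the extreme points of the i-th subproblem's bounded polyhedron as x_i^k (an assumed notation, along with the coupling right-hand side b_0 and block costs c_i), the master program can be sketched as:

```latex
\begin{align}
\min \quad & \sum_{i=1}^{n} \sum_{k} \left( c_i^\top x_i^{k} \right) \lambda_i^{k} \\
\text{s.t.} \quad & \sum_{i=1}^{n} \sum_{k} \left( D_i x_i^{k} \right) \lambda_i^{k} = b_0 && \text{(coupling constraints)} \\
& \sum_{k} \lambda_i^{k} = 1, \qquad i = 1, \dots, n && \text{(convexity constraints)} \\
& \lambda_i^{k} \ge 0
\end{align}
```

Each weight λ_i^k is a master-program column corresponding to one extreme point of block i; the columns are generated on demand rather than enumerated in advance.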

The algorithm

While there are several variations regarding implementation, the Dantzig–Wolfe decomposition algorithm can be briefly described as follows:

  1. Starting with a feasible solution to the reduced master program, formulate new objective functions for each subproblem such that the subproblems will offer solutions that improve the current objective of the master program.
  2. Subproblems are re-solved given their new objective functions. An optimal value for each subproblem is offered to the master program.
  3. The master program incorporates one or all of the new columns generated by the solutions to the subproblems based on those columns' respective ability to improve the original problem's objective.
  4. The master program performs x iterations of the simplex algorithm, where x is the number of columns incorporated.
  5. If the objective has improved, go to step 1; otherwise, continue.
  6. The master program cannot be further improved by any new columns from the subproblems, so the algorithm terminates.
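The pricing logic in steps 1–3 can be sketched in a few lines. This is a minimal illustration, not a full implementation: `price_subproblem`, `reduced_cost`, and the `convexity_dual` parameter are hypothetical names, and the subproblem is represented simply as a list of its extreme points rather than being solved by an LP solver.

```python
# Hypothetical sketch of the column-generation pricing step.
# Given dual prices from the master program, a subproblem searches its
# extreme points for the column with the most negative reduced cost;
# that column, if any, is offered to the master program.

def reduced_cost(cost, column, duals, convexity_dual):
    # reduced cost = original cost
    #              - duals^T (coupling-constraint coefficients)
    #              - dual of this block's convexity constraint
    return cost - sum(p * a for p, a in zip(duals, column)) - convexity_dual

def price_subproblem(extreme_points, duals, convexity_dual):
    """Return (best extreme point, its reduced cost), or (None, 0.0)
    if no point has a negative reduced cost (i.e. no improving column)."""
    best_point, best_rc = None, 0.0
    for cost, column in extreme_points:
        rc = reduced_cost(cost, column, duals, convexity_dual)
        if rc < best_rc - 1e-9:  # strictly improving, with a tolerance
            best_point, best_rc = (cost, column), rc
    return best_point, best_rc
```

In a real implementation the inner search would itself be an optimization over the block's polyhedron (solving min (c_i - D_i^T duals)^T x_i over F_i x_i = b_i), rather than a scan over pre-listed extreme points.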

Implementation

There are examples of the implementation of Dantzig–Wolfe decomposition available in the closed-source AMPL[8] and GAMS[9] mathematical modeling software. General, parallel, and fast implementations are available as open-source software, including some provided by JuMP and the GNU Linear Programming Kit.[10]

The algorithm can be implemented such that the subproblems are solved in parallel, since their solutions are completely independent. When this is the case, there are options for the master program as to how the columns should be integrated into the master. The master may wait until each subproblem has completed and then incorporate all columns that improve the objective or it may choose a smaller subset of those columns. Another option is that the master may take only the first available column and then stop and restart all of the subproblems with new objectives based upon the incorporation of the newest column.
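Two of the integration strategies above can be sketched with Python's standard `concurrent.futures` module. This is an illustrative skeleton under stated assumptions: `solve_subproblem` is a hypothetical stand-in that returns a (block index, column) pair instead of actually solving an LP.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def solve_subproblem(index, duals):
    # Placeholder: a real implementation would re-solve block `index`
    # with an objective built from the master program's dual prices.
    return index, [d * (index + 1) for d in duals]

def collect_all_columns(duals, n_subproblems=3):
    # Strategy 1: wait for every subproblem to finish, then hand all
    # resulting columns to the master program at once.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(solve_subproblem, i, duals)
                   for i in range(n_subproblems)]
        return [f.result() for f in futures]

def take_first_column(duals, n_subproblems=3):
    # Strategy 2: take whichever column arrives first; the caller would
    # then restart all subproblems with objectives updated for it.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(solve_subproblem, i, duals)
                   for i in range(n_subproblems)]
        for f in as_completed(futures):
            return f.result()
```

The "smaller subset" option mentioned above would filter the list returned by `collect_all_columns`, for example keeping only the columns with the most negative reduced costs.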

Another design choice for implementation involves columns that exit the basis at each iteration of the algorithm. Those columns may be retained, immediately discarded, or discarded via some policy after future iterations (for example, remove all non-basic columns every 10 iterations).
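The periodic-removal policy mentioned above can be sketched as follows; `purge_nonbasic_columns` is a hypothetical helper, with columns and the basis represented abstractly.

```python
def purge_nonbasic_columns(columns, basis, iteration, period=10):
    # Hypothetical retention policy: every `period` iterations, discard
    # every column that is not currently in the basis; otherwise keep all.
    if iteration % period == 0:
        return [col for col in columns if col in basis]
    return columns
```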

A computational evaluation of Dantzig–Wolfe decomposition in general, and of Dantzig–Wolfe decomposition with parallel computation, is given in the 2001 PhD thesis of J. R. Tebboth.[11]

Notes and References

  1. George B. Dantzig; Philip Wolfe (1960). "Decomposition Principle for Linear Programs". Operations Research 8: 101–111. doi:10.1287/opre.8.1.101.
  2. Dimitris Bertsimas; John N. Tsitsiklis (1997). Introduction to Linear Optimization. Athena Scientific.
  3. George B. Dantzig; Mukund N. Thapa (1997). Linear Programming 2: Theory and Extensions. Springer.
  4. Vašek Chvátal (1983). Linear Programming. Macmillan.
  5. István Maros; Gautam Mitra (1996). "Simplex algorithms". In J. E. Beasley (ed.), Advances in Linear and Integer Programming, pp. 1–46. Oxford Science. MR 1438309.
  6. István Maros (2003). Computational Techniques of the Simplex Method. International Series in Operations Research & Management Science, vol. 61. Boston, MA: Kluwer Academic Publishers. xx+325 pp. ISBN 1-4020-7332-1. MR 1960274.
  7. Leon S. Lasdon (2002). Optimization Theory for Large Systems. Mineola, NY: Dover Publications. Reprint of the 1970 Macmillan edition. xiii+523 pp. MR 1888251.
  8. Web site: AMPL code repository with Dantzig–Wolfe example. Retrieved December 26, 2008.
  9. Web site: GAMS.
  10. Web site: Open source Dantzig–Wolfe implementation. Retrieved October 15, 2010.
  11. James Richard Tebboth (2001). A Computational Study of Dantzig–Wolfe Decomposition. PhD thesis, University of Buckingham, United Kingdom.