Linear Programming (revision) from 6 – 7.45 p.m. on Tuesday, 31 Jan.

 

Class notices: Quiz – 1 on Thursday, 2 Feb at 8.00 a.m., in class.

 

In case you need some more problems to work on, try the following:

 

 

 

 

You can use MATLAB to plot these functions, verify your calculations, and code your algorithms.

 

Notes on Linear Programming – these will be revised and enhanced as necessary

 

LP is part of self-study.  You are expected to know the basic formulation and the algebraic simplex method (preferably in matrix-vector notation, using the simplex tableau or other schemes).  Any O.R. or optimization book contains this material, e.g. Belegundu and Chandrupatla, Deb (appendix), Chong and Zak (Chapters 15-17), S. S. Rao, etc.  A very quick summary of the important results is listed here.

 

LP in standard form

 

A linear programme (in standard form) is an optimization problem of the form

$\min_x \; c^T x$ subject to $Ax = b$, $x \ge 0$,

where $A$ is a given $m \times n$ matrix ($m \le n$), $b$ is a given $m$-vector (the RHS vector) and $c$ is a given $n$-vector (the cost vector).  The $n$-vector $x$ is to be determined.

 

Note: Maximization can be handled just as easily as minimization, and inequality constraints and unrestricted variables can be transformed into instances of the standard form (add slack/surplus variables; split a free variable as $x = x^+ - x^-$ with $x^+, x^- \ge 0$).
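For concreteness, here is a minimal MATLAB sketch of this transformation (the data A, b, c below are made up purely for illustration): a maximization with inequality constraints is rewritten in standard form by negating the cost vector and appending one slack variable per constraint.

```matlab
% Sketch: convert  max c'x  s.t.  A*x <= b, x >= 0
% into standard form  min cs'*xs  s.t.  As*xs = b, xs >= 0.
A = [1 2; 3 1];          % illustrative data (not from the notes)
b = [4; 6];
c = [1; 1];

[m, n] = size(A);
As = [A eye(m)];         % append slacks: A*x + s = b
cs = [-c; zeros(m, 1)];  % max c'x  <=>  min (-c)'x; slacks cost nothing
% xs = [x; s] >= 0 is the standard-form variable vector.
```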

 

Feasible region

 

The feasible region $K = \{x : Ax = b,\; x \ge 0\}$ is a convex set in $\mathbb{R}^n$, i.e. $y \in K$, $z \in K$ implies that $\alpha y + (1-\alpha)z \in K$ for all $\alpha \in [0,1]$.  $K$ is a polyhedral set (possibly unbounded) in $\mathbb{R}^n$.  A point $x \in K$ is called an extreme point of $K$ if the only way to write $x = \alpha y + (1-\alpha)z$ with $y, z \in K$ and $\alpha \in (0,1)$ is with $x = y = z$.  Extreme points correspond to vertices of the polyhedral set.

 

An $m \times m$ invertible submatrix $B$ of the constraint matrix $A$ is called a basis of $A$.  If $B^{-1}b \ge 0$, then the solution $x = (x_B, x_N) = (B^{-1}b, 0)$ is called a basic feasible solution (bfs) of the LP.  The variables in $x_B$ and $x_N$ are called basic and non-basic variables respectively.  You can verify that such basic feasible solutions account for all extreme points, or vertices, of the feasible region.
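On a small instance, you can check this correspondence by brute force: take every set of m columns of A and keep those that form an invertible B with $B^{-1}b \ge 0$.  A MATLAB sketch (the data here are again illustrative):

```matlab
% Sketch: enumerate all bases of a small standard-form LP {Ax = b, x >= 0};
% each basic feasible solution printed is a vertex of the feasible region K.
A = [1 2 1 0; 3 1 0 1];             % illustrative data
b = [4; 6];
[m, n] = size(A);
C = nchoosek(1:n, m);               % all candidate basic index sets
for k = 1:size(C, 1)
    idx = C(k, :);
    B = A(:, idx);
    if abs(det(B)) > 1e-10          % invertible submatrix => a basis
        xB = B \ b;
        if all(xB >= -1e-10)        % B^{-1}b >= 0 => basic feasible
            x = zeros(n, 1);  x(idx) = xB;
            fprintf('basis {%s} gives bfs [%s]\n', ...
                    num2str(idx), num2str(x'));
        end
    end
end
```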

 

A vector $d \ne 0$ is called a direction of the constraint set $K$ if $x \in K$ implies that $x + \lambda d \in K$ for all $\lambda \ge 0$.  Analogous to extreme points, one can define extreme directions (those that cannot be written as a positive combination of two other distinct directions).

 

Basis representation

 

It can be shown that any vector $x \in K$ can be written as a convex combination of the extreme points plus a non-negative combination of the extreme directions.  That is, if $x_i$ are the extreme points of $K$ and $d_j$ are the extreme directions of $K$, then any $x \in K$ can be written as $x = \sum_i \lambda_i x_i + \sum_j \mu_j d_j$, with $\sum_i \lambda_i = 1$, $\lambda_i \ge 0$ and $\mu_j \ge 0$.

 

If an LP has an optimal solution, then it has an optimal solution at an extreme point of $K$, the feasible region.  This is the rationale for the Simplex method of Linear Programming, which moves from one basic feasible solution to another, improving at every stage, until an optimal solution is found.

 

Algebraic rationale for the simplex method

 

Let $B$ be a basis corresponding to the bfs $(x_B, x_N)$.  Using the constraint equations $Ax = b$ (i.e. $Bx_B + Nx_N = b$), we can write the basic variables $x_B$ in terms of the non-basic variables $x_N$ as $x_B = B^{-1}b - B^{-1}Nx_N$.

 

Substituting this in the objective function $z = c^T x = c_B^T x_B + c_N^T x_N$, we get the expression $z = c_B^T B^{-1}b + (c_N^T - c_B^T B^{-1}N)\, x_N$.

 

The signs of the coefficients of $x_N$ tell us whether the current basis is optimal or not (if any coefficient is negative, then the objective function can be decreased by increasing the corresponding non-basic variable from its current value of zero, so that basis cannot be optimal).  Note that some textbooks work with the negative of this coefficient, so their sign convention for optimality is the reverse.  These coefficients are called the reduced cost coefficients of the variables.  Note that the same definition applied to the basic variables gives a reduced cost of zero.
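As a quick computational check, the reduced costs for a given basis take only a couple of MATLAB lines.  In this sketch the partition Bidx/Nidx and the data A, c are assumed, continuing the standard-form notation above:

```matlab
% Sketch: reduced costs  r = cN' - cB'*B^{-1}*N  for a chosen basis.
Bidx = [3 4];  Nidx = [1 2];        % assumed basic/non-basic index sets
B  = A(:, Bidx);   N  = A(:, Nidx);
cB = c(Bidx);      cN = c(Nidx);
r = cN' - cB' * (B \ N);            % row vector of reduced costs
% With the sign convention of these notes, all(r >= 0) means B is optimal.
```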

 

This also tells us how to move to a better solution.  Pick a variable whose reduced cost is negative and increase it for as long as feasibility is retained (using the constraint equations and watching for the first basic variable to hit zero; this is implemented by a simple ratio of coefficients, called the ratio test in textbooks).  This gives a new bfs, and the process continues.
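Continuing the sketch above, the ratio test can be written as follows (the entering variable q is an illustrative choice, assumed to have a negative reduced cost):

```matlab
% Sketch of the ratio test for entering variable q.
q  = Nidx(1);                       % illustrative entering variable
xB = B \ b;                         % current values of the basic variables
d  = B \ A(:, q);                   % rate at which x_B falls as x_q grows
pos = find(d > 1e-10);              % only positive entries limit x_q
if isempty(pos)
    disp('unbounded: x_q can increase without losing feasibility');
else
    [t, i] = min(xB(pos) ./ d(pos)); % largest feasible step t;
    fprintf('step %g; basic variable in position %d leaves\n', t, pos(i));
end
```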

 

Therefore, we give an informal summary of the Simplex Method below.

 

Simplex method: for $\min_x c^T x$ subject to $Ax = b$, $x \ge 0$

1. Start from a basic feasible solution (obtained, if necessary, by the Phase 1 method below).

2. Compute the reduced costs of the non-basic variables.  If all are non-negative, stop: the current bfs is optimal.

3. Otherwise, choose a non-basic variable with a negative reduced cost as the entering variable.

4. Use the ratio test to determine the leaving basic variable.  If no basic variable limits the increase of the entering variable, stop: the LP is unbounded.

5. Update the basis (pivot) and return to step 2.

Refer to a proper text-book for the details.
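For study purposes, the steps above can be assembled into a minimal dense-matrix MATLAB routine.  This is a sketch (save as simplex_sketch.m), not production code: it assumes a starting feasible basis Bidx is supplied (see Phase 1 below) and has no anti-cycling safeguard.

```matlab
function [x, z] = simplex_sketch(A, b, c, Bidx)
% Minimal simplex sketch for  min c'x  s.t.  A*x = b, x >= 0.
% Bidx: index set of a starting feasible basis.
    n = size(A, 2);
    while true
        B    = A(:, Bidx);
        xB   = B \ b;                               % current basic solution
        Nidx = setdiff(1:n, Bidx);
        r = c(Nidx)' - c(Bidx)' * (B \ A(:, Nidx)); % reduced costs
        [rmin, j] = min(r);
        if rmin >= -1e-10            % no negative reduced cost:
            break;                   % current bfs is optimal
        end
        q = Nidx(j);                 % entering variable
        d = B \ A(:, q);
        pos = find(d > 1e-10);
        if isempty(pos)
            error('LP is unbounded');
        end
        [~, i] = min(xB(pos) ./ d(pos));  % ratio test
        Bidx(pos(i)) = q;            % pivot: entering replaces leaving
    end
    x = zeros(n, 1);  x(Bidx) = B \ b;  z = c' * x;
end
```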

 

Phase 1 Simplex

 

A preliminary result is that an LP in standard form has a bfs if it is feasible.  [Note that the feasible region $\{(x_1, x_2) : x_2 \ge 0\}$, seen as a feasible region in $\mathbb{R}^2$, does not have an extreme point, but in standard form this system does have a bfs.]

 

The Simplex method requires a starting bfs.  The following is one method to obtain a bfs (if one exists) for the system $\{Ax = b,\; x \ge 0\}$.

 

Formulate the LP

 

Min $\mathbf{1}^T x_a$

s.t.  $Ax + x_a = b$

        $x \ge 0$, $x_a \ge 0$,

where $\mathbf{1}$ is the $m$-vector of all ones.  It is assumed that $b \ge 0$, as usual (multiply a constraint through by $-1$ if necessary).  The variables $x_a$ are called the artificial variables, one for each constraint.

 

This LP always has the bfs $(x, x_a) = (0, b)$, so the simplex method for this problem can be initiated.

 

The main result is that if this LP has optimal objective function value > 0, then the original LP has no feasible solution, and if the optimal value is 0, then the optimal $x$ is a bfs of the original system (provided no artificial variables remain in the basis at value 0, which can happen for a degenerate Phase 1 LP; such artificials can be pivoted out).
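A MATLAB sketch of this construction, reusing the illustrative simplex_sketch routine above:

```matlab
% Sketch: Phase 1 for  {A*x = b, x >= 0}  via artificial variables.
neg = b < 0;                          % first ensure b >= 0 ...
A(neg, :) = -A(neg, :);  b(neg) = -b(neg);  % ... by flipping those rows
[m, n] = size(A);
Aph = [A eye(m)];                     % one artificial variable per row
cph = [zeros(n, 1); ones(m, 1)];      % minimize  1'*xa
Bidx0 = n+1 : n+m;                    % artificials form the starting basis
[x1, z1] = simplex_sketch(Aph, b, cph, Bidx0);
if z1 > 1e-8
    disp('original LP is infeasible');
else
    x0 = x1(1:n);                     % feasible for the original LP
end
```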

 

The phase 1 LP can be computationally combined with the calculations of Phase 2, which is what is done in practice.

 

 

Actual implementations of the simplex method use the revised simplex method, which generates the column corresponding to the entering variable only when required, and maintains a copy of the basis inverse through successive updates.
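The basis-inverse update at a pivot can be done without refactorization, using the standard product-form (eta-matrix) update.  A sketch with illustrative names (Binv is the current basis inverse, d = Binv*A(:,q) the entering column in basis coordinates, p the leaving position):

```matlab
% Sketch: product-form update of the basis inverse after a pivot.
eta    = -d / d(p);                 % eta column of the elementary matrix
eta(p) =  1 / d(p);
E = eye(m);  E(:, p) = eta;         % E differs from I only in column p
Binv = E * Binv;                    % basis inverse for the new basis
```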

 

Termination and performance

 

The simplex method (due to Dantzig, c. 1947) is guaranteed to terminate in one of three ways: with an optimal basic feasible solution; with the conclusion that the LP is infeasible (detected in Phase 1); or with the conclusion that the LP is unbounded (detected in the ratio test).

The only care that has to be taken concerns cycling in the case of degenerate solutions (where several bfs's correspond to a single extreme point and the method keeps moving from one to another with no improvement).  This can be avoided by one of several anti-cycling rules (e.g. Bland's rule).  The simplex method performs well in practice and has solved very large problems, especially when specialized to particular problem structures and combined with matrix computational enhancements.

 

However, the simplex method is known to have poor (exponential) worst-case performance, and so is not a theoretically satisfactory algorithm.  Two classes of algorithms that are theoretically attractive (polynomial time) are the Ellipsoid algorithm, proposed by Khachiyan (1979), and Interior Point algorithms, first proposed by Karmarkar (1984).  Interior point algorithms have since been enhanced in several ways and now form a viable alternative to the simplex method.  They take a non-linear optimization approach to LP, with computational enhancements such as scaling of the feasible region to achieve large step sizes, among other measures.