The above problem is represented graphically in figure 2.16 to indicate that there is a bounded optimal
solution even though the solution space is unbounded.
Multiple or Alternative Optimal Solutions
In some linear programming problems the final basic solution need not be unique; there may be alternative or even infinitely many optimal solutions, i.e., different product mixes yielding the same value of the objective function (namely the profit). This case occurs when the objective function line is parallel to a binding constraint line. The objective function then takes the same optimal value at more than one basic solution; these are called alternative basic solutions. Any weighted average of the basic optimal solutions also yields an optimal non-basic feasible solution, which implies that the problem has multiple, indeed infinitely many, optimal solutions without any change in the objective function value. This is illustrated in the following example.
Operations Research (MTH601)
From the graphical approach in figure 2.17 it is evident that x = 20, y = 60 and x = 40, y = 30 are both basic feasible solutions with Z* = 180. Since the two straight lines representing the objective function and the third constraint are parallel, we observe that as the objective function line is moved parallel to itself in the solution space in order to maximize its value, it coincides with the line representing the third constraint. This indicates that not only the two extreme points (20, 60) and (40, 30) are optimal, but every point on the line segment between them is an optimal solution as well. Indeed we have infinitely many such points and hence multiple (non-basic) optimal feasible solutions to the linear programming problem.
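The claim that every point on the segment is optimal can be checked numerically. A minimal sketch in Python, assuming the objective Z = 3x + 2y, which we reconstruct here from the two quoted optima (the original objective function is not reproduced in the text):

```python
# Objective Z = 3x + 2y is reconstructed from the two optima quoted in
# the text: 3*20 + 2*60 = 3*40 + 2*30 = 180. This is an assumption, not
# data copied from the (omitted) problem statement.
def z(x, y):
    return 3 * x + 2 * y

p, q = (20, 60), (40, 30)          # the two basic optimal solutions
assert z(*p) == z(*q) == 180
# Every weighted average of the two optima gives the same profit:
for t in (0.25, 0.5, 0.75):
    x = (1 - t) * p[0] + t * q[0]
    y = (1 - t) * p[1] + t * q[1]
    assert abs(z(x, y) - 180) < 1e-9
print("every convex combination yields Z = 180")
```

The interior points of the segment are exactly the non-basic optimal solutions discussed above.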
The same problem is now approached through the simplex method to see how it provides a clue to the existence of other optimal solutions. We introduce slack variables to convert the problem into standard form, as presented below:
The above equations are conveniently set down in the initial table and further iterations are carried out as shown in the tables below.
First Iteration: x enters and S2 leaves the basis.
Divide equation 3 by the key element to make the key number 1.
Second Iteration: y enters and S3 leaves the basis.
With this iteration we reach the optimal solution which is
x = 40, y = 30, S2 = 30, S1 = S3 = 0, Z* = 180
Now what is the clue that the problem has other optimal solutions?
We note from the second iteration that one of the non-basic variables, S1, has a zero coefficient in the current objective row, which represents Z = 180 + 0·S1 − S3. As the non-basic variable S1 is increased, the value of Z neither increases nor decreases, so the corresponding basic feasible solution must also be optimal.
We can find another optimal basic feasible solution by bringing the non-basic variable S1 into the basis; the variable S2 then leaves the basis. Another iteration is performed as shown below.
Third iteration: S1 enters and S2 leaves the basis.
Hence x = 20, y = 60, S1 = 20, S2 = S3 = 0 is another optimal basic feasible solution. Still another iteration could be performed, as the current objective row again has a zero coefficient for a non-basic variable; it would, however, simply return to the previous basic solution. So we conclude that there are only two optimal basic feasible solutions to the problem. However, any weighted average of these optimal solutions is also an optimal (non-basic) feasible solution. Indeed there are infinitely many such solutions, corresponding to the points on the line segment between the two optimal extreme points.
The same idea can also be checked by counting the number of zero coefficients in the objective row of the optimal table. In any linear programming problem solved by the simplex method, there are at least as many zero coefficients in the objective row as there are basic variables. If the optimal table contains more zero coefficients in the objective row than there are variables in the basis, this is a clear indication that there is yet another optimal basic feasible solution.
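The counting rule can be sketched as a small check. The objective-row values below are illustrative, chosen to match the tableau described above, where x, y and S2 are basic and the non-basic variable S1 also has a zero coefficient; they are not copied from the omitted table:

```python
def has_alternative_optima(objective_row, num_basic):
    """More zeros in the objective row than basic variables
    signals another optimal basic feasible solution."""
    zeros = sum(1 for coeff in objective_row if coeff == 0)
    return zeros > num_basic

# Coefficients of (x, y, S1, S2, S3) in the optimal objective row
# Z = 180 + 0*S1 - S3, with basis {x, y, S2}:
print(has_alternative_optima([0, 0, 0, 0, 1], num_basic=3))  # True
```

With only the basic variables at zero the test returns False, correctly indicating a unique optimum.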
Non-Existing Feasible Solution
This happens when there is no point in the solution space satisfying all the constraints, i.e., the constraints are contradictory or inconsistent with one another. The feasible solution space is then empty and the problem has no feasible solution. This is illustrated first by the graphical approach and then by the simplex method.
Example (no feasible solution)
Introduce slack variables and an artificial variable (for the constraint of the type ≥ ).
Since an artificial variable is introduced in the last constraint, a penalty of +MA is added to the L.H.S. of the objective row. Hence the objective function equation becomes,
Z - 3x - 4y + MA = 0
Again the objective function row should not contain nonzero coefficients for the basic variables. Thus we multiply the last constraint by (−M) and add it to the above equation. Thus we have,
Z − (3 + M)x − (4 + 2M)y + M S2 = −12M
as the objective function equation.
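The elimination step can be traced in code. The last constraint is reconstructed here from the coefficients of M in the equation above as x + 2y − S2 + A = 12; that reconstruction, and the use of a concrete large number in place of the symbolic penalty M, are assumptions:

```python
M = 1000.0   # a large penalty standing in for the symbolic "M"
# Objective row  Z - 3x - 4y + M*A = 0  and the reconstructed last
# constraint  x + 2y - S2 + A = 12:
obj = {'x': -3, 'y': -4, 'S2': 0, 'A': M, 'rhs': 0}
row = {'x': 1, 'y': 2, 'S2': -1, 'A': 1, 'rhs': 12}
# Add (-M) times the constraint row to drive A's coefficient to zero:
for k in obj:
    obj[k] -= M * row[k]
print(obj)
# {'x': -1003.0, 'y': -2004.0, 'S2': 1000.0, 'A': 0.0, 'rhs': -12000.0}
```

The resulting row matches Z − (3 + M)x − (4 + 2M)y + M S2 = −12M term by term.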
First Iteration: y enters and S1 leaves the basis.
The last iteration reveals that we cannot proceed further with the maximization, as there is no negative coefficient in the objective function row; yet the final solution has the artificial variable in the basis at a positive level (equal to 4). This indicates that the second constraint is violated and hence the problem has no feasible solution.
Therefore a linear programming problem has no feasible solution if an artificial variable appears in the basis of the optimal table.
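The contradiction can also be seen arithmetically. A minimal sketch, assuming (consistently with the final tableau, where the artificial variable ends at value 4) that the constraints are x + y ≤ 4 and x + 2y ≥ 12 with x, y ≥ 0; the actual constraint data is not reproduced in the text:

```python
# Vertices of the region  x + y <= 4, x >= 0, y >= 0:
vertices = [(0, 0), (4, 0), (0, 4)]
# The largest value x + 2y can attain on that region is at a vertex:
best = max(x + 2 * y for x, y in vertices)
print(best)                 # 8, but the second constraint demands >= 12
assert 12 - best == 4       # shortfall = value of the artificial variable
```

No point can satisfy both constraints, which is exactly what the artificial variable remaining in the basis signals.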
Thus far we have restricted the decision variables to be non-negative, as is the case in most practical or real-life problems. There may, however, be situations in which this does not hold, while the simplex method assumes non-negative variables. If the problem involves variables unrestricted in sign (variables that can take positive, negative or zero values), it can be converted into an equivalent problem involving only non-negative variables.
A variable unrestricted in sign can always be expressed as the difference of two non-negative variables. If x is the variable unrestricted in sign, it can be replaced by two other variables, say m and n, which are non-negative: x = m − n. Since m and n are non-negative, the value of x is positive if m > n, negative if m < n and zero if m = n. Thus the values of m and n, which are non-negative, decide the sign of the variable x. Since the simplex method examines only basic feasible solutions (extreme points), it will always keep at least one of these two non-negative variables at zero.
To illustrate we consider the following example.
Note that only y is restricted to be non-negative and no mention is made about x. Hence x has to be treated as a variable unrestricted in sign: it can take any positive, negative or zero value.
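The substitution can be sketched as a small helper (the function name is ours, for illustration):

```python
def split_unrestricted(x):
    """Express a sign-unrestricted x as m - n with m, n >= 0."""
    return (x, 0.0) if x >= 0 else (0.0, -x)

for x in (5.0, -3.0, 0.0):
    m, n = split_unrestricted(x)
    assert m >= 0 and n >= 0 and m - n == x
# m > n when x is positive, m < n when negative, m = n when zero
print("x = m - n reproduces every sign")
```

Note that the helper always keeps one of m, n at zero, mirroring what the simplex method does at extreme points.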
Then we have an equivalent problem by replacing x by (m - n),
Introduce slack variables and express the problem in the standard form. We have
First Iteration: y enters and S2 leaves the basis.
Second Iteration: m enters and S3 leaves the basis.
Therefore the solution to the original problem will be
If the original problem has more than one variable unrestricted in sign, the procedure is systematized by replacing each such unrestricted variable xj by xj = mj − n, where mj ≥ 0 and n ≥ 0 as before, but n is the same single variable for all the unrestricted variables.
Solve the Following Problems by the Simplex Method
Use the simplex method to demonstrate that the following problem has an unbounded optimal solution.
How do you identify multiple solutions in a linear programming problem using the simplex method?
For every linear programming problem there is an associated linear programming problem; if the former is called the primal, the latter is called its dual, and vice versa. The concept of duality was developed along with the discovery of linear programming and has many important ramifications. The two problems may appear to have only a superficial relationship, but they possess intimately related and useful properties, so that the optimal solution of one problem gives complete information about the optimal solution of the other.
We shall later present a few examples to illustrate how these relationships are useful in reducing the computational effort of solving linear programming problems. The concept of duality is also very useful for obtaining additional information about variations in the optimal solution when certain changes are made in the constraint coefficients, resource availabilities and objective function coefficients. This is termed post-optimality or sensitivity analysis.
The following procedure is generally adopted to convert the primal linear programming problem to its dual.
STEP 1 For each constraint in the primal problem there is an associated variable in the dual problem.
STEP 2 The elements of the right hand side of the constraints in the primal problem are the respective coefficients of the objective function in the dual problem.
STEP 3 When the primal problem seeks maximization as the measure of effectiveness, the dual problem seeks minimization, and vice versa.
STEP 4 The maximization problem has constraints of the type ( ≤ ) and the minimization problem has constraints of the type ( ≥ ).
STEP 5 The variables in both the problems are non-negative (sometimes unrestricted).
STEP 6 The rows of the primal problem are changed to columns in the dual problem.
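Steps 1 to 6 can be summarized in a small routine; the data below is an illustrative primal, not the one from the text:

```python
def dual_of(c, A, b):
    """Dual of:  maximize c.x  subject to  A x <= b, x >= 0
    is:          minimize b.y  subject to  A^T y >= c, y >= 0."""
    A_T = [list(col) for col in zip(*A)]   # STEP 6: rows become columns
    return b, A_T, c                       # STEPS 2 and 4: swap objective and RHS

obj, cons, rhs = dual_of(c=[3, 5],
                         A=[[1, 0], [0, 2], [3, 2]],
                         b=[4, 12, 18])
print(obj)   # [4, 12, 18]            dual objective coefficients
print(cons)  # [[1, 0, 3], [0, 2, 2]] transposed constraint matrix
print(rhs)   # [3, 5]                 dual right-hand sides
```

Three primal constraints give three dual variables, and two primal variables give two dual constraints, exactly as Step 1 prescribes.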
We consider the given problem as the primal linear programming problem. To convert into dual,
the following procedure is adopted.
STEP 1 We have four constraints in the primal problem. So we choose four dual variables, say y1, y2, y3 and y4.
Connect these variables with the right hand side of each constraint, respectively.
STEP 2 The primal objective function is one of maximization. Hence, the dual seeks minimization for the objective
function. The right hand side of the primal problem is associated with the respective dual variable and written as
Z = 60y1 + 20y2 + 45y3 + 15y4
STEP 3 Considering the primal problem, the matrix formed with the constraint coefficients as elements is transposed, and the elements of the transposed matrix are the constraint coefficients of the dual problem. Note that the constraints in the primal problem are of the type ( ≤ ). If any constraint in a maximization problem is of the type ( ≥ ), convert it into a ( ≤ ) inequality by multiplying the constraint throughout by (−1). In a minimization problem convert any inequality of the type ( ≤ ) into ( ≥ ) by multiplying the constraint by (−1).
In the above problem we form the coefficient matrix from the constraint coefficients and then write down its transpose.
STEP 4 Treat the above elements as the coefficients for the constraints of the dual, and take the objective coefficients of the primal variables as the right hand sides of the constraints in the dual problem.
STEP 5 The dual problem seeks minimization and hence its constraints will have inequalities of the type ( ≥ ).
Hence we have the dual of the problem as follows:
Constraints Having an Equality Sign
An equality constraint in the primal problem corresponds to an unrestricted variable in sign in the dual
problem and vice versa. To illustrate consider the example.
The first constraint has the equality sign. Hence this can be replaced by two equivalent constraints taken together as
Let y1', y1'' and y2 be the dual variables corresponding to the primal constraints. The combination ( y1' − y1'' ) is then observed in the objective function and in all the constraints. This will be the case whenever there is an equality constraint in the primal. So if we replace ( y1' − y1'' ) by a new variable y1, which is the difference between two non-negative variables, the variable y1 becomes unrestricted in sign, and the dual problem is
Thus we must have the dual variable corresponding to the equality constraint unrestricted in sign.
Conversely, when a primal variable is unrestricted in sign, its dual constraint must be expressed as an equation.
The following examples reveal the conversion of the primal problem into dual.
Write the dual of the following problem.
There are two constraints. Therefore we have two dual variables y1 and y2. Then we have the dual as
Note that the objective function has coefficient 0 for x3.
Before converting the problem into its dual, the following observations are made regarding the primal problem. The objective function seeks maximization; hence the constraints must be expressed in ( ≤ ) form. The second constraint is of the type ( ≥ ); it has to be converted into ( ≤ ) by multiplying throughout by (−1). The first primal constraint has an equality sign, so the corresponding dual variable will be unrestricted in sign; and the third primal variable is unrestricted, so the corresponding third dual constraint will be an equation.
Write the dual to the following problems, solve the dual and hence find the solution to the primal problem
The dual for the given problem can be easily written as
(Note that it is easier to solve the dual problem than the primal since the primal problem involves the use of artificial
variable technique and it may lead to tedious calculations.)
Introducing the slack variable for the constraints, we have
First Iteration: y2 enters and S2 leaves the basis.
Second Iteration: y1 enters and S3 leaves the basis.
The optimal solution to the dual problem as seen from the last table is
Z* = 21; y1 = 3, y2 = 3, S1 = 1, S2 = S3 = 0
The solution to the primal problem can be obtained from the optimal table of the dual problem by
identifying the coefficients of those variables in the objective row of the optimal table which have been taken as the
basic variables in the starting table of the dual problem.
In the above example the variables S1, S2 and S3 were chosen to form the basis in the starting table of the dual. The corresponding coefficients of these variables S1, S2 and S3 in the objective row of the optimal table are 0, 3 and 2 respectively, and these are the optimal values of the variables in the primal problem.
Note that the objective function has the same value in both primal and dual problems.
The primal optimal solution is
Z* = 21, x1 = 0, x2 = 3 and x3 = 2.
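The read-off rule can be sketched in code, using the objective-row coefficients quoted above (the full tableau itself is not reproduced in the text, so the zeros for the basic variables y1, y2 are implied rather than quoted):

```python
# Coefficients in the objective row of the dual's optimal table; the
# values for S1, S2, S3 (0, 3, 2) are the ones quoted in the text.
optimal_objective_row = {'y1': 0, 'y2': 0, 'S1': 0, 'S2': 3, 'S3': 2}
starting_basis = ['S1', 'S2', 'S3']   # basis of the dual's starting table
primal_solution = [optimal_objective_row[s] for s in starting_basis]
print(primal_solution)   # [0, 3, 2]  ->  x1 = 0, x2 = 3, x3 = 2
```

This is exactly the primal optimum stated above, with the common objective value Z* = 21.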
Economic Interpretation of the Dual
The duality concept has a natural economic interpretation, which is useful for economic analysis. We know
that the linear programming problem is mainly concerned with the allocation of scarce resources among many
activities in competition.
If
xj is the level of activity j,
cj is the unit profit from activity j, expressed in rupees,
bi is the amount of resource i available,
aij is the amount of resource i consumed by each unit of activity j,
then consider a constraint from the corresponding dual problem.
a1j y1 + a2j y2 + ... + amj ym ≥ cj
From the primal problem, the unit of cj is rupees per unit of activity j, and the units of aij are units of resource i per unit of activity j. This implies that yi must have units of rupees per unit of resource i. In other words, yi may be interpreted as the unit price of resource i, in the sense of the implicit or imputed value of the resource to the user. The left-hand side of the dual constraint can then also be interpreted as the imputed cost of operating activity j at unit level.
The dual constraint ∑ aij yi ≥ cj suggests that this imputed cost should not be less than the profit that could be achieved from the activity. The dual objective, minimize ∑ bi yi, indicates that the prices of the resources should be set so as to minimize their total cost. The optimal values of the dual variables, yi* (i = 1, 2, ..., m), then represent the incremental value, or shadow price, of the resources; the duality theorem Zx* = Zy* implies that the total cost (implicit value) of the resources equals the total profit from the activities using them in an optimal solution.
It is interesting to note that the interpretation of yi* developed above is the rate at which the profit would change (increase or decrease) when the amount of resource i is changed (increased or decreased), over a certain range of bi within which the original optimal basis does not change. In fact yi* does represent the marginal value of resource i.
From the theorems of duality we understand that solving either the primal or the dual problem automatically solves the other. So, to identify the optimal primal solution, we can apply the simplex method to whichever problem requires less computational effort. The time a computer spends on the simplex method is quite sensitive to the number of constraints. We also know that the number of original variables in the primal problem equals the number of constraints in the dual problem, so the dual is the easier problem whenever the original form of the primal problem has more constraints than variables (m > n).
(b) Another important application of the concept of duality is the analysis of the problem when its parameters are changed after the original optimal solution has been obtained. This is normally dealt with under the topic 'duality and post-optimality analysis'.
(c) From duality analysis we know that the value of the objective function for any feasible solution of the dual problem provides an upper bound on the optimal value of the objective function of the primal problem. This is useful for judging how well one is doing, or how much better one could do, before expending the effort needed to obtain the exact optimum.
To illustrate, consider the following primal linear programming problem.
The dual of the above problem can be written as
If we take the feasible values y1 = 3, y2 = 4 and y3 = 0, the dual objective function value is Zy = 3y1 + 4y2 + 5y3 = 25. So, without further investigation, it is clear that Zx* ≤ 25. In fact the maximum value is Zx* = 17, as obtained by solving the primal problem.
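The bound can be verified directly from the numbers quoted above:

```python
# Dual objective Zy = 3*y1 + 4*y2 + 5*y3 at the feasible point (3, 4, 0):
y1, y2, y3 = 3, 4, 0
Zy = 3 * y1 + 4 * y2 + 5 * y3
print(Zy)           # 25, an upper bound on the primal maximum (weak duality)
Zx_star = 17        # the true primal optimum quoted in the text
assert Zx_star <= Zy
```

The gap between 17 and 25 shows how loose a bound from an arbitrary dual-feasible point can be; at the dual optimum the two values coincide.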