Initialization of the Simplex Algorithm
Author: Andrea Rujano.
As we saw in the simplex post, at each iteration the simplex method considers a basis in the form of a matrix \(A_B.\) This matrix is built from a subset of linearly independent columns of the matrix \(A.\)
The construction of this matrix raises two questions:
- How can we choose linearly independent columns of \(A\) to form the matrix \(A_B\)?
- How can we ensure that the vector \(x_B=A_B^{-1}b\) has nonnegative coordinates?
Let's address these questions with an example:
\[
\begin{array}{rrl}
\min & 2x_1 + x_2 \\
\text{s.t.} & x_1 - x_2 & \leq 1 \\
& x_1 + 2x_2 & \leq 4 \\
& x_1,x_2 &\geq 0.
\end{array}
\]
The simplex method requires the linear program (LP) to be in standard form: a minimization problem with all constraints written as equations, nonnegative decision variables, and a nonnegative right-hand side. To transform the LP above into standard form, we add slack variables \(x_3\) and \(x_4\) and obtain the following:
\[
\begin{array}{rrl}
\min & 2x_1 + x_2 + 0x_3 +0x_4 \\
\text{s.t.} & x_1 - x_2 + x_3 & = 1 \\
& x_1 + 2x_2 + x_4 & = 4 \\
& x_1,x_2, x_3, x_4 &\geq 0.
\end{array}
\]
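The slack values can be read off directly from the original inequalities. Here is a quick sketch in plain Python (the function name `slacks` is my own, for illustration only):

```python
# Slack values for any point (x1, x2): each slack absorbs the gap
# between the left- and right-hand side of its inequality.
def slacks(x1, x2):
    x3 = 1 - x1 + x2        # from x1 - x2 + x3 = 1
    x4 = 4 - x1 - 2 * x2    # from x1 + 2*x2 + x4 = 4
    return x3, x4

print(slacks(0, 0))  # (1, 4): the origin is feasible, both slacks nonnegative
```

A point is feasible for the original LP exactly when both slacks come out nonnegative.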
In this case,
\[
A=
\begin{bmatrix}
1 & -1 & 1 & 0 \\
1 & 2 & 0 & 1
\end{bmatrix}, \quad
b=
\begin{bmatrix}
1 \\
4
\end{bmatrix}.
\]
Notice that the rank of \(A\) is two; in other words, \(A\) has full row rank, which is a necessary condition for constructing the matrix \(A_B.\) Also, since \(A\) has four columns, there are \(\binom{4}{2}=6\) possible ways to choose the two columns that compose the matrix \(A_B.\)
While we could analyze each of the six combinations for such a small matrix \(A,\) this approach would easily become intractable for larger matrices. Also, some of those combinations might not even meet the requirements to compose a basis. For example, some combinations might yield a matrix \(A_B\) with linearly dependent columns or an infeasible solution (with negative coordinates).
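As a sanity check, we can enumerate all six column pairs and test each candidate basis for invertibility and a nonnegative basic solution. This is a minimal sketch in plain Python (the names and structure are my own, not from the post):

```python
from itertools import combinations
from fractions import Fraction

# Columns of A and right-hand side from the standard-form LP above.
A = [[1, -1, 1, 0],
     [1,  2, 0, 1]]
b = [Fraction(1), Fraction(4)]

feasible = []
for i, j in combinations(range(4), 2):      # the 6 ways to pick 2 columns
    det = A[0][i]*A[1][j] - A[0][j]*A[1][i]
    if det == 0:
        continue                            # columns linearly dependent
    # Solve A_B x = b by Cramer's rule (exact rational arithmetic).
    x0 = (b[0]*A[1][j] - A[0][j]*b[1]) / det
    x1 = (A[0][i]*b[1] - b[0]*A[1][i]) / det
    if x0 >= 0 and x1 >= 0:
        feasible.append((i + 1, j + 1))     # record 1-indexed column pairs

print(feasible)  # [(1, 2), (1, 4), (2, 3), (3, 4)]: only 4 of the 6 pairs qualify
```

The pairs \((1,3)\) and \((2,4)\) are invertible but yield a basic solution with a negative coordinate, illustrating exactly the failure mode described above.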
To overcome this issue, and at the same time answer the two previous questions, we consider a new problem. Since the rank of \(A\) is two, we add two new artificial variables, \(x_5\) and \(x_6,\) and set the cost vector of this new problem to \(c^T=(0, 0, 0, 0, 1, 1).\) The result is the following LP:
\[
\begin{array}{rrl}
\min & 0x_1 + 0x_2 + 0x_3 + 0x_4 + x_5 + x_6 \\
\text{s.t.} & x_1 - x_2 + x_3 + x_5 & = 1 \\
& x_1 + 2x_2 + x_4 + x_6 & = 4 \\
& x_1,x_2, x_3, x_4, x_5, x_6 &\geq 0.
\end{array}
\]
The objective is to minimize \(x_5 + x_6\) because any feasible solution of this LP with \(x_5=0\) and \(x_6=0\) is also a feasible solution of the original problem.
The coefficients and right-hand side of the new LP in matrix form are as follows:
\[
A=
\begin{bmatrix}
1 & -1 & 1 & 0 & 1 & 0\\
1 & 2 & 0 & 1 & 0 & 1
\end{bmatrix}, \quad
b=
\begin{bmatrix}
1 \\
4
\end{bmatrix}.
\]
This new problem is known as Phase I, and it has an obvious basis, \(A_B = [A^5 \; A^6],\) which is the identity matrix. With this choice of basis, the corresponding solution is \(x= (0, 0, 0, 0, 1, 4).\) Notice that this solution is always feasible by construction, because the right-hand side \(b\) is nonnegative whenever the LP is in standard form.
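A tiny sketch in plain Python makes this starting point explicit (variable names are mine):

```python
# Phase I starting basis: the artificial columns A^5, A^6 form the identity.
A = [[1, -1, 1, 0, 1, 0],
     [1,  2, 0, 1, 0, 1]]
b = [1, 4]
basis = [4, 5]                                   # 0-indexed positions of x5, x6
A_B = [[A[i][j] for j in basis] for i in range(2)]
assert A_B == [[1, 0], [0, 1]]                   # identity, so A_B^{-1} b = b
x = [0, 0, 0, 0, b[0], b[1]]                     # basic feasible solution
print(x)  # [0, 0, 0, 0, 1, 4]
```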
At this point we have the following:
\[
\begin{array}{rl}
x_N & = (0, 0, 0, 0) \\
x_B & = (1, 4) \\
c_N^T & = (0, 0, 0, 0) \\
c_B^T & = (1, 1).
\end{array}
\]
If we evaluate the expression for the reduced costs we obtain:
\[c_N^T-c_B^TA_B^{-1}A_N = (-2, -1, -1, -1).\]
All the entries of this vector are negative, which implies that increasing any of the non-basic variables will reduce the value of the cost function. We can pick any of them; let's pick the first entry, which corresponds to the non-basic variable \(x_1.\) This will be the variable entering the basis.
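The reduced-cost computation is short enough to verify directly. A minimal sketch in plain Python (names are mine; since \(A_B\) is the identity here, \(c_B^TA_B^{-1}A_N\) reduces to \(c_B^TA_N\)):

```python
# Reduced costs at the Phase I starting basis (A_B = I, so A_B^{-1} = I).
A = [[1, -1, 1, 0, 1, 0],
     [1,  2, 0, 1, 0, 1]]
c = [0, 0, 0, 0, 1, 1]
basis = [4, 5]                       # x5, x6 (0-indexed)
nonbasic = [0, 1, 2, 3]
cB = [c[j] for j in basis]           # (1, 1)
reduced = [c[j] - (cB[0]*A[0][j] + cB[1]*A[1][j]) for j in nonbasic]
print(reduced)  # [-2, -1, -1, -1]: every nonbasic variable improves the cost
```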
To choose the leaving variable, we consider the expression:
\[x_B= A_B^{-1}b-A_B^{-1}A^1x_1 =
\begin{bmatrix}
1\\ 4
\end{bmatrix} -
\begin{bmatrix}
1 \\
1
\end{bmatrix}x_1.
\]
Notice that both entries of the vector \(A_B^{-1}A^1\) are positive. Now, if we increase the value of \(x_1\) from zero, which entry of \(x_B\) will become zero first? The answer tells us the leaving variable. In this case, increasing \(x_1\) up to 1 gives:
\[x_B = \begin{bmatrix}
0 \\ 3
\end{bmatrix}.\]
The first entry becomes zero first, at the minimum value of \(x_1.\) This entry corresponds to the variable \(x_5,\) so \(x_5\) is the leaving variable. This step is carried out with the so-called ratio test, which takes the minimum of the ratios:
\[\dfrac{(A_B^{-1}b)_i}{(A_B^{-1}A^1)_i}, \quad \text{with }(A_B^{-1}A^1)_i>0.\]
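The ratio test for this pivot fits in a few lines of plain Python (exact arithmetic via `fractions`; names are mine):

```python
from fractions import Fraction

# Ratio test for entering variable x1 at the Phase I starting basis.
xB = [Fraction(1), Fraction(4)]     # A_B^{-1} b   (A_B = I here)
d  = [Fraction(1), Fraction(1)]     # A_B^{-1} A^1
ratios = [(xB[i] / d[i], i) for i in range(len(d)) if d[i] > 0]
step, leave_pos = min(ratios)       # smallest ratio wins
print(step, leave_pos)  # 1 0 -> x1 enters with value 1; position 0 (x5) leaves
```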
After this first iteration of the simplex algorithm, we have moved from the basic feasible solution \((0, 0, 0, 0, 1, 4)\) to the basic feasible solution \((1, 0, 0, 0, 0, 3).\)
After repeating this process for two more iterations, we reach the Phase I solution \(x=(2, 1, 0, 0, 0, 0),\) which corresponds to the feasible point \((2, 1, 0, 0)\) of the original problem. This tells us that a basis to start the simplex algorithm on the original problem, namely Phase II, is \(A_B = [A^1 \; A^2].\)
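The whole Phase I run can be reproduced with a small revised-simplex sketch in plain Python. This is my own minimal implementation for illustration, not the post's code: it uses exact arithmetic via `fractions` and Bland's rule (first improving index) to pick the entering variable.

```python
from fractions import Fraction

def solve2(M, v):
    # Solve the 2x2 system M x = v by Cramer's rule (M assumed invertible).
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(v[0]*M[1][1] - M[0][1]*v[1]) / det,
            (M[0][0]*v[1] - v[0]*M[1][0]) / det]

# Phase I problem: minimize x5 + x6 subject to the constraints above.
A = [[1, -1, 1, 0, 1, 0],
     [1,  2, 0, 1, 0, 1]]
c = [0, 0, 0, 0, 1, 1]
b = [Fraction(1), Fraction(4)]
basis = [4, 5]                                # start from the artificial basis

while True:
    AB = [[Fraction(A[i][j]) for j in basis] for i in range(2)]
    xB = solve2(AB, b)                        # current basic solution
    # Dual values y from A_B^T y = c_B, so that y^T = c_B^T A_B^{-1}.
    ABt = [[AB[0][0], AB[1][0]], [AB[0][1], AB[1][1]]]
    y = solve2(ABt, [Fraction(c[j]) for j in basis])
    # Bland's rule: first nonbasic column with negative reduced cost enters.
    entering = None
    for j in range(6):
        if j not in basis and c[j] - (y[0]*A[0][j] + y[1]*A[1][j]) < 0:
            entering = j
            break
    if entering is None:
        break                                 # no improving column: optimal
    d = solve2(AB, [Fraction(A[0][entering]), Fraction(A[1][entering])])
    # Ratio test picks the leaving basic position.
    _, leave_pos = min((xB[i] / d[i], i) for i in range(2) if d[i] > 0)
    basis[leave_pos] = entering

x = [Fraction(0)] * 6
for pos, j in enumerate(basis):
    x[j] = xB[pos]
print([int(v) for v in x])  # [2, 1, 0, 0, 0, 0]: x5 = x6 = 0, Phase I done
```

With this pivoting rule the run matches the walkthrough above: \(x_1\) enters first, then \(x_2,\) and the loop stops at \(x=(2,1,0,0,0,0)\) with basis \([A^1 \; A^2].\)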