
Section 35.2 Duality in linear programming

The main differences between the standard forms of LP problems can be summarized as:

  • Constraints: Inequalities or Equations
  • Variables: Real or Nonnegative

Other differences, such as minimization vs maximization and inequalities \(\le\) vs \(\ge\text{,}\) are relatively minor: changing the signs of \(A, \mathbf b\text{,}\) or \(\mathbf c\) takes care of those variations. So, one can imagine four major types of LP problems: IR, IN, ER, and EN. However, “ER” type problems are not really interesting, because a system of linear equations with real variables determines an affine subspace (an intersection of hyperplanes), and the restriction of a linear function to an affine subspace is either unbounded or constant. This leaves us with three types: IR, IN, and EN, examples of which are given by (35.1.1), (35.1.2), and (35.1.3), respectively.

This section will describe how each LP problem has a dual LP problem. Specifically:

  • A problem of type IR has a dual of type EN
  • A problem of type EN has a dual of type IR
  • A problem of type IN has a dual of type IN

In general, the dual of the dual problem is the original (primal) problem. The IN-IN duality is symmetric while IR-EN duality is asymmetric.

Specifically, symmetric duality states that the dual of

\begin{equation} \mathbf c^T \mathbf x\to \max \quad \text{where } A\mathbf x \le \mathbf b, \quad \mathbf x \ge 0\label{eq-standard-form-IN-1}\tag{35.2.1} \end{equation}

is

\begin{equation} \mathbf b^T \mathbf y\to \min \quad \text{where } A^T\mathbf y \ge \mathbf c, \quad \mathbf y \ge 0\label{eq-standard-form-IN-2}\tag{35.2.2} \end{equation}
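For a small concrete example (the numbers are chosen only for illustration), let the primal problem (35.2.1) be

\begin{equation*} 3x_1 + 2x_2 \to \max \quad \text{where } x_1+x_2 \le 4,\ x_1 \le 2, \quad x_1, x_2 \ge 0 \end{equation*}

so that \(\mathbf c = (3,2)^T\) and \(\mathbf b = (4,2)^T\text{.}\) Its dual (35.2.2) is

\begin{equation*} 4y_1 + 2y_2 \to \min \quad \text{where } y_1+y_2 \ge 3,\ y_1 \ge 2, \quad y_1, y_2 \ge 0 \end{equation*}

The primal optimum is \(10\text{,}\) attained at \(\mathbf x^* = (2,2)\text{;}\) the dual optimum is also \(10\text{,}\) attained at \(\mathbf y^* = (2,1)\text{.}\) The equality of these two optimal values is not a coincidence, as explained below.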

Asymmetric duality states that the dual of

\begin{equation} \mathbf c^T \mathbf x\to \max \quad \text{where } A\mathbf x \le \mathbf b\label{eq-standard-form-IR-1}\tag{35.2.3} \end{equation}

is

\begin{equation} \mathbf b^T \mathbf y\to \min \quad \text{where } A^T\mathbf y = \mathbf c, \quad \mathbf y \ge 0\label{eq-standard-form-EN-1}\tag{35.2.4} \end{equation}

Note that in both cases, the variables and constraints trade places; the dual has as many constraints as the original problem had variables (not counting the nonnegativity requirements). Also, the dual of a maximization problem is a minimization problem (and the other way around).
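One can check directly that the dual of the dual is the original problem. In the symmetric case, rewrite (35.2.2) in the form (35.2.1) by flipping signs: maximize \((-\mathbf b)^T\mathbf y\) where \((-A^T)\mathbf y \le -\mathbf c\text{,}\) \(\mathbf y \ge 0\text{.}\) Applying the rule (35.2.1)\(\to\)(35.2.2) to this problem yields: minimize \((-\mathbf c)^T\mathbf x\) where \((-A^T)^T\mathbf x \ge -\mathbf b\text{,}\) \(\mathbf x\ge 0\text{,}\) which after undoing the sign changes is exactly (35.2.1).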

Strong Duality Theorem: Suppose that the primal LP problem has an optimal solution \(\mathbf x^*\text{.}\) Then the dual problem has an optimal solution \(\mathbf y^*\) such that \(\mathbf c^T\mathbf x^*=\mathbf b^T\mathbf y^*\text{.}\) Thus, although these problems have different objective functions, both objective functions have the same extreme (optimal) value.

Every feasible point \(\mathbf x\) of (35.2.1) gives a lower bound on its optimal value: that is, \(\mathbf c^T \mathbf x^*\ge \mathbf c^T \mathbf x\text{.}\) On the other hand, every feasible point \(\mathbf y\) of (35.2.2) gives an upper bound on its optimal value: that is, \(\mathbf b^T\mathbf y^* \le \mathbf b^T\mathbf y\text{.}\) Since the optimal value (so far unknown) is the same for both problems, it is contained somewhere between known quantities \(\mathbf c^T \mathbf x\) and \(\mathbf b^T\mathbf y\text{.}\) This leads to the idea of moving the points \(\mathbf x, \mathbf y\) through the feasible sets so that the duality gap \(\mathbf b^T\mathbf y - \mathbf c^T \mathbf x\) is decreased. Once the gap reaches zero, we have the solution. Or, if the gap becomes extremely small, we have a point \(\mathbf x\) that is almost as good as the optimal one.
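Here is a minimal computational sketch of these bounds, using the small example above. It assumes SciPy (the text itself does not prescribe any particular software); linprog minimizes, so the maximization problem is passed with the sign of \(\mathbf c\) flipped.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])

# Primal (35.2.1): maximize c^T x subject to A x <= b, x >= 0
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
# Dual (35.2.2): minimize b^T y subject to A^T y >= c, i.e. -A^T y <= -c, y >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print("primal optimum:", -primal.fun)   # c^T x* = 10
print("dual optimum:  ", dual.fun)      # b^T y* = 10

# Any pair of feasible points brackets the common optimal value:
x = np.array([1.0, 1.0])   # feasible for the primal: A x <= b, x >= 0
y = np.array([3.0, 1.0])   # feasible for the dual: A^T y >= c, y >= 0
print("lower bound:", c @ x)            # 5
print("upper bound:", b @ y)            # 14
print("duality gap:", b @ y - c @ x)    # 9

The printed lower and upper bounds, 5 and 14, enclose the common optimal value 10; at the optimal pair the gap is zero.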

Without going into the detailed proof of the Strong Duality Theorem, here is a sketch for asymmetric duality. Suppose \(\mathbf x^*\) is an optimal point for (35.2.3). If \(\mathbf y\) is any feasible point of (35.2.4), then

\begin{equation} \mathbf c^T \mathbf x^* = (A^T \mathbf y)^T \mathbf x^* = \mathbf y^T (A\mathbf x^*) \le \mathbf y^T \mathbf b = \mathbf b^T \mathbf y\label{eq-lp-duality-sketch}\tag{35.2.5} \end{equation}

which shows that each value of the objective function of the dual problem gives an upper bound for the primal problem (this part is the “weak duality theorem”). It remains to show that there exists \(\mathbf y\) for which equality holds in (35.2.5).

Imagine applying force \(\mathbf c\) to the point \(\mathbf x^*\text{:}\) if the point can be moved in this direction, the objective function will increase. Since such an increase is impossible, the force \(\mathbf c\) must be balanced by reaction forces of the “walls”, i.e., the hyperplanes \((A\mathbf x)_k = b_k\) that bound the constraints. Only the active constraints, those for which \((A\mathbf x^*)_k = b_k\text{,}\) are relevant here, since \(\mathbf x^*\) does not touch the hyperplanes of inactive constraints. The outward normal vectors of the constraint hyperplanes are the rows of \(A\text{,}\) that is, the columns of \(A^T\text{.}\) The balance of forces therefore expresses \(\mathbf c\) as a linear combination of the columns of \(A^T\) with nonnegative coefficients, where the coefficients of inactive constraints are zero. This means that \(\mathbf c = A^T \mathbf y^*\) for some vector \(\mathbf y^*\ge 0\) such that if \((A\mathbf x^*)_k\lt b_k\) (inactive constraint) then \(y^*_k=0\text{.}\) Returning to (35.2.5), we see that with \(\mathbf y=\mathbf y^*\) it becomes an equality.
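The balance of forces can also be observed numerically. The following minimal sketch (again assuming SciPy; the numbers are chosen only for illustration) solves a small problem of type (35.2.3) with free variables, detects the active constraints at \(\mathbf x^*\text{,}\) and recovers \(\mathbf y^*\ge 0\) with \(A^T\mathbf y^* = \mathbf c\) from the active rows by nonnegative least squares; the coefficients of the inactive constraints come out zero and the duality gap vanishes.

import numpy as np
from scipy.optimize import linprog, nnls

# A small problem of type (35.2.3): maximize c^T x subject to A x <= b, x real
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.array([2.0, 3.0, 5.0])
c = np.array([1.0, 1.0])

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
x_star = res.x                                 # optimal point (2, 3)

active = np.isclose(A @ x_star, b)             # which "walls" touch x*
print("active constraints:", active)           # [True, True, False]

# Force balance: c is a nonnegative combination of the active rows of A.
# nnls minimizes ||M y - c|| over y >= 0; restrict to the active rows and
# pad the inactive positions with zeros to get the dual optimal y*.
y_active, residual = nnls(A[active].T, c)
y_star = np.zeros(len(b))
y_star[active] = y_active

print("y* =", y_star, " residual:", residual)  # y* = [1, 1, 0], residual ~ 0
print("duality gap:", b @ y_star - c @ x_star) # ~ 0

Here \(\mathbf y^*\) is feasible for the dual (35.2.4), and the zero gap confirms that (35.2.5) holds with equality.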