
Section 34.2 Lagrange multiplier method

Suppose we want to minimize \(f\) subject to the equality constraint \(c = 0\) in \(n\)-dimensional space. At a point \(\mathbf x^*\) of constrained minimum, \(\nabla f\) must be perpendicular to the surface \(c = 0\text{.}\) Since \(\nabla c\) is also perpendicular to this surface, \(\nabla f\) is parallel to \(\nabla c\text{.}\) We can express this as \(\nabla f = \lambda \nabla c\) where \(\lambda\) is the Lagrange multiplier (so far unknown). We have the system

\begin{equation*} \nabla f(\mathbf x) - \lambda \nabla c(\mathbf x) = 0, \quad c(\mathbf x) = 0 \end{equation*}

of \(n+1\) equations with \(n+1\) unknowns \((\mathbf x, \lambda)\text{.}\) The Jacobian of this system is the block matrix

\begin{equation*} \begin{pmatrix} Hf - \lambda\, Hc & -\nabla c \\ (\nabla c)^T & 0 \end{pmatrix} \end{equation*}

where \(Hf\) and \(Hc\) are the Hessians of \(f\) and \(c\text{:}\) the upper left block is the Hessian, with respect to \(\mathbf x\text{,}\) of the Lagrangian \(F = f - \lambda c\text{.}\) It is feasible to solve this system using the multivariable Newton's method, for example.

Minimize \(f(\mathbf x) = x_1^4 + \exp(3x_1+x_2)\) on the circle \(x_1^2 + x_2^2 = 1\) using the Lagrange multiplier method.

Solution

The code is similar to Example 32.2.1 but with a more complicated gradient and Hessian. To avoid notational confusion, it uses x for the vector \((x_1, x_2)\) and v for the vector \((x_1, x_2, \lambda)\text{.}\) The functions Fg and Fh below are the gradient and Hessian of the Lagrangian

\begin{equation*} F(x_1, x_2, \lambda) = x_1^4 + \exp(3x_1+x_2) - \lambda (x_1^2 + x_2^2 - 1) \end{equation*}

We equate Fg to the zero vector and use Fh in Newton's method.

f = @(x) x(1)^4 + exp(3*x(1)+x(2));   % objective function

% gradient of F with respect to (x1, x2, lambda)
Fg = @(v) [4*v(1)^3 + 3*exp(3*v(1) + v(2)) - v(3)*(2*v(1)); ...
    exp(3*v(1) + v(2)) - v(3)*(2*v(2)); ...
    -(v(1)^2 + v(2)^2 - 1)];
% Hessian of F
Fh = @(v) [12*v(1)^2 + 9*exp(3*v(1) + v(2)) - v(3)*2, 3*exp(3*v(1) + v(2)), -(2*v(1)); ...
    3*exp(3*v(1) + v(2)), exp(3*v(1) + v(2)) - v(3)*2, -(2*v(2)); ...
    -2*v(1),  -2*v(2), 0];
    
v = randn(3, 1);    % random starting point (x1, x2, lambda)

max_tries = 10000;

for k = 1:max_tries
    dv = -Fh(v)\Fg(v);      % Newton step
    v = v + dv;
    if norm(dv) < 1e-6      % stop when the step is small enough
        break
    end
end

x = v(1:2);    % the point found, discarding lambda
if k < max_tries
    fprintf('Found x = (%g, %g) with f(x) = %g after %d steps\n', x, f(x), k);
else
    disp('Failed to converge')
end

The method converges quickly and the point it finds satisfies the constraint. But this point is often a maximum rather than a minimum of the function: the system expresses only a first-order necessary condition, which maxima and saddle points satisfy as well. As usual with Newton's method, the situation can be improved by running a rough search algorithm first, just to locate a good starting point, as sketched below.
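For example, one could sample the constraint circle by angle, pick the sample with the smallest value of f, and estimate the initial \(\lambda\) by least squares from \(\nabla f = \lambda \nabla c\text{.}\) A minimal sketch of this idea, reusing f and Fg from above (the grid size 100 is an arbitrary choice):

t = linspace(0, 2*pi, 100);                     % angles parameterizing the circle
fvals = arrayfun(@(s) f([cos(s); sin(s)]), t);  % f at each sample point
[~, ind] = min(fvals);
x0 = [cos(t(ind)); sin(t(ind))];                % rough minimizer on the circle
fg = Fg([x0; 0]);                               % with lambda = 0, the first two entries are the gradient of f
cg = 2*x0;                                      % gradient of the constraint at x0
lambda0 = (fg(1:2).'*cg)/(cg.'*cg);             % least-squares estimate of lambda
v = [x0; lambda0];                              % starting point for Newton's method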

MATLAB provides the command fmincon which implements constrained minimization with various constraints, both equality and inequality types. However, it is part of the Optimization Toolbox rather than core MATLAB, and Octave does not have it built-in.
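For completeness, here is a sketch of how the above problem could be solved with fmincon, assuming the Optimization Toolbox is available. The nonlinear constraint function returns the inequality constraints (none here) and the equality constraints; the starting point [0; -1] is an arbitrary choice.

f = @(x) x(1)^4 + exp(3*x(1) + x(2));
circle = @(x) deal([], x(1)^2 + x(2)^2 - 1);   % no inequalities; equality ceq = 0 on the circle
x0 = [0; -1];                                   % arbitrary starting point
x = fmincon(f, x0, [], [], [], [], [], [], circle);
fprintf('fmincon found x = (%g, %g) with f(x) = %g\n', x, f(x));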