**Definition**: The gradient of a function $ f(x, y, z) $ is defined by

$ \bigtriangledown f(x, y, z) = f_x(x, y, z) \vec{i}+f_y(x, y, z) \vec{j}+f_z(x, y, z) \vec{k} $,

where $ f_x(x, y, z) $, $ f_y(x, y, z) $, and $ f_z(x, y, z) $ are the partial derivatives of $ f $ with respect to $ x $, $ y $, and $ z $, respectively.
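This definition is easy to sanity-check numerically. Below is a minimal sketch (my own illustrative example, not from the text) that approximates the gradient by central differences:

```python
def grad(f, p, h=1e-6):
    """Approximate the gradient of f at point p via central differences."""
    p = list(p)
    g = []
    for i in range(len(p)):
        hi, lo = p[:], p[:]
        hi[i] += h
        lo[i] -= h
        g.append((f(*hi) - f(*lo)) / (2 * h))
    return g

# Example: f(x, y, z) = x^2 y + z, whose gradient is (2xy, x^2, 1).
f = lambda x, y, z: x**2 * y + z
print(grad(f, (1.0, 2.0, 3.0)))  # approximately [4.0, 1.0, 1.0]
```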

--------------------

**Theorem**: (Lagrange Multipliers) Let $ f(x, y, z) $ and $ g(x, y, z) $ be functions with continuous first partial derivatives on some open set containing the constraint surface $ g(x, y, z) = 0 $, and suppose $ \bigtriangledown g \neq 0 $ at every point on this surface. If $ f $ has a relative extremum subject to the constraint, then it occurs at a point $ (x_0, y_0, z_0) $ on the constraint surface such that there exists some constant $ \lambda $ satisfying

$ \bigtriangledown f(x_0, y_0, z_0) = \lambda \bigtriangledown g(x_0, y_0, z_0) $.
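As a quick sanity check of the theorem (on a toy example of my own, not from the post): minimizing $ f(x, y, z) = x^2+y^2+z^2 $ subject to $ x+y+z-3 = 0 $, symmetry puts the minimum at $ (1, 1, 1) $, and the Lagrange condition indeed holds there with $ \lambda = 2 $:

```python
# Toy check of the Lagrange condition: minimize f = x^2 + y^2 + z^2
# subject to g = x + y + z - 3 = 0. At the minimum (1, 1, 1) we have
# grad f = (2, 2, 2) and grad g = (1, 1, 1), so grad f = 2 * grad g.
def grad_f(x, y, z):
    return (2 * x, 2 * y, 2 * z)

def grad_g(x, y, z):
    return (1, 1, 1)

p = (1.0, 1.0, 1.0)
lam = grad_f(*p)[0] / grad_g(*p)[0]
assert all(abs(gf - lam * gg) < 1e-12
           for gf, gg in zip(grad_f(*p), grad_g(*p)))
print(lam)  # 2.0
```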

--------------------

**Problem**: (1999 Poland) Let $ a,b,c $ be positive reals satisfying $ a^2+b^2+c^2 = 1 $. Prove that

$ a+b+c+\frac{1}{abc} \ge 4\sqrt{3} $.

**Solution**: Several approaches work, but I'll do it with Lagrange Multipliers to illustrate their usefulness. We let $ g(a,b,c) = a^2+b^2+c^2-1 $ and $ f(a,b,c) = a+b+c+\frac{1}{abc} $. Then

$ \bigtriangledown g = 2a\vec{i}+2b\vec{j}+2c\vec{k} $

and

$ \bigtriangledown f = \left(1-\frac{1}{a^2bc}\right)\vec{i} + \left(1-\frac{1}{ab^2c}\right)\vec{j} + \left(1-\frac{1}{abc^2}\right)\vec{k} $.

If $ \lambda $ is a constant such that $ \bigtriangledown f = \lambda \bigtriangledown g $, then we have

$ \left(1-\frac{1}{a^2bc}\right)\vec{i} + \left(1-\frac{1}{ab^2c}\right)\vec{j} + \left(1-\frac{1}{abc^2}\right)\vec{k} = \lambda (2a\vec{i}+2b\vec{j}+2c\vec{k}) $.

Looking at the $ \vec{i} $ and $ \vec{j} $ components, we obtain

$ \lambda = \frac{1-\frac{1}{a^2bc}}{2a} = \frac{1-\frac{1}{ab^2c}}{2b} $, so cross-multiplying and then multiplying through by $ a^2b^2c $ gives $ a^2b^3c-b^2 = a^3b^2c-a^2 $.

This factors as

$ (a-b)(a^2b^2c-a-b) = 0 $,

so $ a = b $ or $ c = \frac{a+b}{a^2b^2} $. In the latter case, however, since $ 0 < a, b < 1 $ we know $ a \ge a^2b^2 $ and $ b \ge a^2b^2 $, so $ \frac{a+b}{a^2b^2} \ge 2 $, which contradicts $ c < 1 $. Hence we must have $ a = b $.
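The factorization above is easy to verify numerically; here is a quick sketch (my own check, not part of the original solution) testing the identity at random positive values:

```python
import random

# Check the identity  a^2 b^3 c - b^2 - (a^3 b^2 c - a^2)
#                        = -(a - b)(a^2 b^2 c - a - b)
# at many random positive points.
for _ in range(1000):
    a, b, c = (random.uniform(0.1, 2.0) for _ in range(3))
    lhs = a**2 * b**3 * c - b**2 - (a**3 * b**2 * c - a**2)
    rhs = -(a - b) * (a**2 * b**2 * c - a - b)
    assert abs(lhs - rhs) < 1e-9
print("factorization checks out")
```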

Similarly, comparing the $ \vec{j} $ and $ \vec{k} $ components gives $ b = c $, so $ a = b = c = \frac{1}{\sqrt{3}} $ is the only critical point with all variables positive. Since $ f \to \infty $ as any of $ a, b, c \to 0^+ $, the minimum over the positive part of the sphere is attained at an interior critical point, so it must occur here, and it is indeed a minimum.

Hence the minimum value of $ f $ is $ f\left(\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right) = 4\sqrt{3} $, as desired. QED.
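This conclusion can be spot-checked numerically (a sanity check, not a proof — my own addition): sample random positive points on the unit sphere and confirm $ f \ge 4\sqrt{3} $, with equality at the symmetric point.

```python
import math
import random

def f(a, b, c):
    return a + b + c + 1.0 / (a * b * c)

# Equality at the symmetric point a = b = c = 1/sqrt(3).
s = 1 / math.sqrt(3)
assert abs(f(s, s, s) - 4 * math.sqrt(3)) < 1e-9

# Random positive points on the sphere never do better.
for _ in range(10000):
    x, y, z = (random.uniform(0.01, 1.0) for _ in range(3))
    n = math.sqrt(x * x + y * y + z * z)
    a, b, c = x / n, y / n, z / n  # random positive point on the sphere
    assert f(a, b, c) >= 4 * math.sqrt(3) - 1e-9
print("minimum 4*sqrt(3) spot-checked")
```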

--------------------

Comment: The computations with Lagrange Multipliers can get pretty ugly, but the method can be very effective in certain situations. For people without knowledge of classical inequalities, this is probably the way to go in optimization problems.

--------------------

Practice Problem: Find the minimum and maximum of $ f(x, y, z) = x^4+y^4+z^4 $ subject to the constraint $ x+y+z = 1 $.

that is a creative use of Lagrange multipliers, i wonder if there is another solution ...

I've found that Lagrange Multipliers have limited use in problems that specify positive reals. If the function turns out to be convex, the maximum will occur on the boundary of the domain, not at an interior critical point. =/

As for the other problem, grad(f) = 4x^3 i + 4y^3 j + 4z^3 k

grad(g) = i + j + k

so the minimum occurs at x = y = z = 1/3, giving a minimum value of 1/27 (this is pretty obvious with Power Mean though)

And the maximum is unbounded: take x = y = t and z = 1 - 2t, and let t approach infinity.
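Spot-checking the commenter's answer numerically (my own addition): at x = y = z = 1/3 the value is 3*(1/3)^4 = 1/27, and random points on the constraint plane never beat it.

```python
import random

def f(x, y, z):
    return x**4 + y**4 + z**4

m = f(1/3, 1/3, 1/3)
assert abs(m - 1/27) < 1e-12

# f is convex, so the critical point is the global minimum on the plane.
for _ in range(10000):
    x = random.uniform(-2.0, 2.0)
    y = random.uniform(-2.0, 2.0)
    z = 1.0 - x - y  # stay on the constraint plane x + y + z = 1
    assert f(x, y, z) >= m - 1e-9
print("minimum 1/27 spot-checked")
```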

Well you can't expect to kill everything with them... just keep them handy. I got this out of a calculus textbook, btw.

yea, some IMO inequalities have quick solutions with Lagrange multipliers

ReplyDeletespecific example being IMO 1984