## Tuesday, May 30, 2006

### Polynomial On The Floor? Topic: Algebra/Polynomials. Level: AIME.

Problem: (2005 Putnam - B1) Find a nonzero polynomial $P(x,y)$ such that $P(\lfloor a \rfloor, \lfloor 2a \rfloor) = 0$ for all real numbers $a$.

Solution: What's the relation between $\lfloor a \rfloor$ and $\lfloor 2a \rfloor$? We can see that either

$\lfloor 2a \rfloor = 2 \lfloor a \rfloor$

or

$\lfloor 2a \rfloor = 2 \lfloor a \rfloor + 1$

for all reals $a$ (yes, even negative ones). So it suffices to take a polynomial with factors $y-2x$ and $y-2x-1$, such as

$P(x,y) = (y-2x)(y-2x-1)$.

QED.
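As a quick numeric sanity check (ours, not part of the solution; the helper name is our own), we can verify the identity on random reals, negatives included:

```python
import math
import random

# The polynomial from the solution: it vanishes whenever y = 2x or y = 2x + 1.
def P(x, y):
    return (y - 2 * x) * (y - 2 * x - 1)

# Check P(floor(a), floor(2a)) = 0 for many random reals.
for _ in range(10000):
    a = random.uniform(-100.0, 100.0)
    assert P(math.floor(a), math.floor(2 * a)) == 0
```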

--------------------

Comment: This is about the easiest question you'll see on a Putnam test. Each question is worth 10 points (120 total) and about half of the people who take it get 0 points.

--------------------

Practice Problem: (2005 Putnam - A1) Show that every positive integer is a sum of one or more numbers of the form $2^r3^s$, where $r$ and $s$ are nonnegative integers and no summand divides another. (For example, $23=9+8+6$.)

## Sunday, May 28, 2006

### Don't Mess With The Juggernaut. Topic: Inequalities. Level: Olympiad.

Problem: Given nonnegative reals $a,b,c$ satisfying $a+b+c = 3$, find the minimum of the expression

$a^3+b^3+c^3+kabc$

for $k \ge \frac{15}{4}$.

Solution: At first glance, this looks like just another boring classical homogeneous symmetric inequality, so the minimum must occur at $a = b = c = 1$! But looking into it a little more, we find that $a = b = \frac{3}{2}, c = 0$ gives something smaller whenever $k > \frac{15}{4}$ (and something equal when $k = \frac{15}{4}$). In fact, we will show that this is the minimum. We claim that

$a^3+b^3+c^3+kabc \ge \frac{(a+b+c)^3}{4}$.

Expanding and rearranging, this is equivalent to

$3(a^3+b^3+c^3)+(4k-6)abc \ge 3(a^2b+a^2c+b^2c+b^2a+c^2a+c^2b)$.

But wait, this form reminds us a lot of Schur! Schur says that (multiplied by a factor of $3$)

$3(a^3+b^3+c^3)+9abc \ge 3(a^2b+a^2c+b^2c+b^2a+c^2a+c^2b)$.

So it remains to show that

$(4k-15)abc \ge 0$.

But since $4k-15 \ge 0$ (from $k \ge \frac{15}{4}$) and $a,b,c$ are nonnegative, the inequality is clearly true. Hence we have

$a^3+b^3+c^3+kabc \ge \frac{(a+b+c)^3}{4} = \frac{27}{4}$

for $k \ge \frac{15}{4}$. Note that the equality conditions are met in both inequalities when $a = b = \frac{3}{2}$ and $c = 0$. QED.
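A brute-force grid check of the claimed minimum for $k = \frac{15}{4}$ (a sanity check of our own, with our own names; the grid contains both $(1,1,1)$ and $(\frac{3}{2}, \frac{3}{2}, 0)$):

```python
# Grid search over the constraint a + b + c = 3 with a, b, c >= 0,
# checking that the minimum of a^3 + b^3 + c^3 + k*abc is 27/4 for k = 15/4.
def f(a, b, c, k):
    return a**3 + b**3 + c**3 + k * a * b * c

k = 15 / 4
n = 200  # grid resolution
best = min(
    f(3 * i / n, 3 * j / n, 3 - 3 * i / n - 3 * j / n, k)
    for i in range(n + 1)
    for j in range(n + 1 - i)
)
assert abs(best - 27 / 4) < 1e-9
```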

--------------------

Comment: The original problem was the case $k = 8$, but it generalizes easily once you see the proof. This is an interesting inequality because the usual equality case $a = b = c$ appears only when $k = \frac{15}{4}$.

--------------------

Practice Problem: Solve for $x \in \mathbb{R}$

$2x^2-3x\lfloor x-1 \rfloor+\lfloor x-1 \rfloor^2 \le 0$.

## Friday, May 26, 2006

### It's A One-To-One Tie! Topic: Linear Algebra.

Note: Added my linear algebra book to the Math Books page.

--------------------

Definition: A function $f: A \to B$ between vector spaces is linear if $f(x+y) = f(x)+f(y)$ and $cf(x) = f(cx)$ for $x,y \in A$ and $c \in \mathbb{R}$ (or, rather, any field $\mathbb{F}$). Furthermore, it is one-to-one if and only if $f(x) = f(y) \Rightarrow x = y$.

--------------------

Problem: Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be linear. Then $T$ is one-to-one if and only if $T$ carries linearly independent subsets of $V$ onto linearly independent subsets of $W$.

Solution: First we will show that if $T$ is one-to-one, then it carries linearly independent subsets of $V$ onto linearly independent subsets of $W$.

Let $S$ be a linearly independent subset of $V$. Suppose $S = \{x_1, x_2, \ldots, x_k\}$. We want to show that $R = \{T(x_1), T(x_2), \ldots, T(x_k)\}$ is also linearly independent. Suppose that $R$ is not. Then there exist $a_1, a_2, \ldots, a_k$ not all zero such that

$a_1T(x_1)+a_2T(x_2)+\cdots+a_kT(x_k) = 0$.

Since $T$ is linear, we can rewrite this as

$T(a_1x_1)+T(a_2x_2)+\cdots+T(a_kx_k) = T(a_1x_1+a_2x_2+\cdots+a_kx_k) = 0$.

Since $T(0) = 0$ and $T$ is one-to-one, we must have

$a_1x_1+a_2x_2+\cdots+a_kx_k = 0$.

But $S$ is linearly independent, so this is only possible if $a_1 = a_2 = \cdots = a_k = 0$, a contradiction. So $R$ is linearly independent.

Now, we will prove the converse - if $T$ takes linearly independent subsets of $V$ to linearly independent subsets of $W$ then $T$ is one-to-one. Let $x, y \in V$ be nonzero such that $T(x) = T(y)$. It suffices to show that $x = y$.

If $\{x, y\}$ is linearly dependent, then $x = cy$ for some $c \in \mathbb{R}$, so

$T(x) = T(cy) = cT(y) = cT(x)$.

If $T(x) = T(y) \neq 0$, this forces $c = 1$, so $x = y$. In the case that $T(x) = T(y) = 0$, the set $\{x\}$ is linearly independent but $T$ takes it onto $\{0\}$, which is linearly dependent, a contradiction.

If $\{x, y\}$ is linearly independent, then $\{T(x), T(y)\}$ is also linearly independent. But since $T(x) = T(y)$, the set is clearly linearly dependent, a contradiction.

Lastly, if exactly one of $x, y$ is zero, say $y = 0$, then $T(x) = T(0) = 0$ with $x \neq 0$, and again $T$ would take the linearly independent set $\{x\}$ onto $\{0\}$, a contradiction. So $T$ must be a one-to-one function, as desired. QED.

--------------------

Comment: Basically, we made extensive use of the properties of a linear transformation to prove both directions. A nice corollary to this proof is that if $T$ is one-to-one, then $S$ is linearly independent if and only if $T(S)$ is linearly independent.
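Both directions can be illustrated in finite dimensions, assuming NumPy is available (matrices and ranks stand in for $T$ and linear independence; the example matrices are our own):

```python
import numpy as np

# A matrix T: R^2 -> R^3 is one-to-one iff rank(T) = 2 (independent columns).
T_injective = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
T_collapsing = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank 1, not one-to-one

# An independent subset {x1, x2} of R^2, stored as the columns of S.
S = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.linalg.matrix_rank(S) == 2

# The one-to-one map preserves independence; the non-injective one collapses it.
assert np.linalg.matrix_rank(T_injective @ S) == 2
assert np.linalg.matrix_rank(T_collapsing @ S) == 1
```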

--------------------

Practice Problem: Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be linear. Then $T$ is one-to-one if and only if $N(T) = \{0\}$ (here, $N(T)$ is the kernel of $T$, defined by $N(T) = \{x | T(x) = 0\}$).

## Wednesday, May 24, 2006

### The Algebra Of Lines? Topic: Linear Algebra.

Note: See MathWorld for any necessary definitions.

--------------------

Problem: Let $\{v_1, v_2, \ldots, v_m\}$ be a linearly independent set of vectors in a nonzero subspace $S$ of $\mathbb{R}^n$ such that $\mathrm{span}(v_1, v_2, \ldots, v_m) \subsetneq S$. Then it can be extended by adding some more vectors to form a basis of $S$.

Solution: Take any vector $v_{m+1} \in S$ such that $v_{m+1}$ is not a linear combination of $\{v_1, v_2, \ldots, v_m\}$ (i.e. $v_{m+1}$ is not in the span of the original set of vectors). The existence of such a vector is guaranteed by the fact that the span is strictly contained within $S$. Since it is not a linear combination, the new set $\{v_1, v_2, \ldots, v_{m+1}\}$ is still linearly independent. Furthermore, it has a greater span than the original set. If the new span is still strictly contained within $S$, we may add another vector $v_{m+2} \in S$. Repeating this process allows us to keep increasing the span. But the process must terminate, because a linearly independent set in $\mathbb{R}^n$ contains at most $n$ vectors; therefore, after adding some vector $v_k$ with $m < k \le n$ we will have formed a basis $\{v_1, v_2, \ldots, v_k\}$ of $S$. QED.

--------------------

Comment: This is actually a lemma that leads to one of the most basic nontrivial results in Linear Algebra, that every subspace $S$ of $\mathbb{R}^n$ in fact has a basis. The idea is trivial in $\mathbb{R}^3$, for example, because a single vector is the basis for any line through the origin and two vectors are the basis for any plane through the origin. It is known that lines and planes through the origin (as well as $0$ and $\mathbb{R}^3$ itself) are the only subspaces in $\mathbb{R}^3$.
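The proof is constructive, and the greedy procedure it describes can be sketched with NumPy (the function name and the rank test are our own choices, not from the post):

```python
import numpy as np

def extend_to_basis(independent, spanning_set):
    """Extend the linearly independent list `independent` to a basis of
    span(spanning_set), by repeatedly adding any vector that enlarges the span."""
    current = list(independent)
    for v in spanning_set:
        if np.linalg.matrix_rank(np.column_stack(current + [v])) > len(current):
            current.append(v)  # v lies outside the current span, so independence is kept
    return current

# Example: extend {(1,1,0)} to a basis of the xy-plane in R^3.
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
basis = extend_to_basis([np.array([1.0, 1.0, 0.0])], [e1, e2])
assert len(basis) == 2
```

It suffices to test only vectors from a spanning set of $S$: if the current span is strictly smaller than $S$, some spanning vector must lie outside it.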

--------------------

Practice Problem: Any two bases of a subspace $S$ of $\mathbb{R}^n$ have the same number of elements (this number is called the dimension of $S$).

## Tuesday, May 23, 2006

### Productive! Topic: Algebra/NT. Level: AIME.

Problem: (2006 Bellevue BATH Team) Evaluate $\displaystyle \sum_{a=1}^{10} \sum_{b=1}^{10} \sum_{c=1}^{10} \sum_{d=1}^{10} dcab$.

Solution: Note that we can write the sum as

$\displaystyle \sum_{b=1}^{10} \sum_{c=1}^{10} \sum_{d=1}^{10} (1 \cdot bcd+ 2 \cdot bcd + \cdots + 10 \cdot bcd) = \sum_{b=1}^{10} \sum_{c=1}^{10} \sum_{d=1}^{10} (1+2+\cdots+10)bcd$.

Using the same idea, we can write it as

$(1+2+\cdots+10)(1+2+\cdots+10)(1+2+\cdots+10)(1+2+\cdots+10) = 55^4$.

QED.
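A brute-force check of the factoring (our own, purely for reassurance):

```python
# Evaluate the quadruple sum directly and compare with 55^4.
total = sum(
    a * b * c * d
    for a in range(1, 11)
    for b in range(1, 11)
    for c in range(1, 11)
    for d in range(1, 11)
)
assert total == 55**4 == 9150625
```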

--------------------

Comment: Evaluating multiple summations this way is very effective. Other examples include factoring the harmonic series

$\displaystyle 1+\frac{1}{2}+\frac{1}{3}+\cdots = \left(1+\frac{1}{2}+\frac{1}{2^2}+\cdots\right) \left(1+\frac{1}{3}+\frac{1}{3^2}+\cdots\right) \left(1+\frac{1}{5}+\frac{1}{5^2}+\cdots\right) = \prod \sum_{i=0}^\infty \frac{1}{p^i}$

or, more generally, any integer value of the Riemann Zeta Function

$\displaystyle \zeta(s) = \frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\cdots = \prod \sum_{i=0}^\infty \frac{1}{p^{is}}$.

Furthermore, since

$\displaystyle \sum_{i=0}^\infty \frac{1}{p^{is}}$

is an infinite geometric series of common ratio $\frac{1}{p^s}$, we can say that

$\displaystyle \zeta(s) = \prod \frac{1}{1-\frac{1}{p^s}} = \prod \frac{p^s}{p^s-1}$,

where the product is taken over all primes $p$.
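A truncated numeric comparison of the two sides for $s = 2$ (our own check; both truncations should land near $\zeta(2) = \pi^2/6 \approx 1.6449$):

```python
# Compare a partial sum of zeta(2) with a partial Euler product over small primes.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2
zeta_sum = sum(1 / k**s for k in range(1, 100001))
euler_product = 1.0
for p in primes_up_to(1000):
    euler_product *= 1 / (1 - p**-s)  # geometric series 1 + p^-s + p^-2s + ...
assert abs(zeta_sum - euler_product) < 1e-3
```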

## Sunday, May 21, 2006

### Up, Up, And Away! Topic: Algebra/Calculus. Level: AIME.

Problem: (Problem-Solving Through Problems - 6.9.12) Suppose that $f$ is differentiable, and that $f^{\prime}(x)$ is strictly increasing for $x \ge 0$. If $f(0) = 0$, prove that $\frac{f(x)}{x}$ is strictly increasing for $x > 0$.

Solution: Let $g(x)$ be the average value of $f^{\prime}(x)$ over the interval $[0, x]$. We have

$\displaystyle g(x) = \frac{1}{x} \int_0^x f^{\prime}(t)\, dt = \frac{f(x)-f(0)}{x} = \frac{f(x)}{x}$, since $f(0) = 0$.

But since $f^{\prime}(x)$ is strictly increasing, as $x$ increases we are averaging in values of $f^{\prime}$ that are larger than the current average, so the average is strictly increasing as well. Hence $g(x) = \frac{f(x)}{x}$ is strictly increasing for $x > 0$, as desired. QED.
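A quick numeric check with a sample function of our own choosing, $f(x) = e^x - 1$ (here $f^{\prime}(x) = e^x$ is strictly increasing and $f(0) = 0$):

```python
import math

# The ratios f(x)/x should be strictly increasing for x > 0.
xs = [0.1 * i for i in range(1, 101)]
ratios = [(math.exp(x) - 1.0) / x for x in xs]
assert all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))
```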

--------------------

Comment: Noticing the average value function was a nice way to solve this problem and made it "intuitively" clear. We took advantage of the fact that the integral of the derivative is the function itself (the Fundamental Theorem of Calculus).

--------------------

Practice Problem: (Problem-Solving Through Problems - 6.9.8) Let $f: [0,1] \to (0,1)$ be continuous. Show that the equation

$\displaystyle 2x-\int_0^x f(t)dt = 1$

has exactly one solution in the interval $[0, 1]$.

## Saturday, May 20, 2006

### 2006 Bellevue BATH Today!

I'll have the competition problems up after it's over.

EDIT: They're up.

## Tuesday, May 16, 2006

### See What The Integrator Has To Say. Topic: Calculus/Trigonometry. Level: Olympiad.

Problem: Evaluate $\displaystyle \int_0^\pi \frac{x\sin{x}}{1+\cos^2{x}} dx$.

Solution: Now this does not have a straightforward "just integrate it" solution, as The Integrator will show you. So we think... symmetry.

Note that $\sin{x} = \sin{(\pi-x)}$ and $\cos^2{x} = \cos^2{(\pi-x)}$. So that means

$\displaystyle \int_0^\pi \frac{x\sin{x}}{1+\cos^2{x}} dx = \frac{1}{2} \int_0^\pi \frac{(x+(\pi-x)) \sin{x}}{1+\cos^2{x}} dx = \frac{\pi}{2} \int_0^\pi \frac{\sin{x}}{1+\cos^2{x}} dx$.

But this integral is much more friendly, and we can easily make the substitution $u = \cos{x}$ so $du = -\sin{x} dx$. Our integral becomes

$\displaystyle -\frac{\pi}{2} \int_1^{-1} \frac{du}{1+u^2} = -\frac{\pi}{2} [\arctan{u}]_1^{-1} = -\frac{\pi}{2} \left(-\frac{\pi}{4}-\frac{\pi}{4}\right) = \frac{\pi^2}{4}$.
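A midpoint-rule check of the final value (our own numerics, not part of the solution):

```python
import math

# Numerically integrate x*sin(x)/(1 + cos(x)^2) over [0, pi]; compare to pi^2/4.
n = 100000
h = math.pi / n
approx = 0.0
for i in range(n):
    x = (i + 0.5) * h  # midpoint of the i-th subinterval
    approx += x * math.sin(x) / (1.0 + math.cos(x) ** 2) * h
assert abs(approx - math.pi**2 / 4) < 1e-6
```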

--------------------

Comment: Here we see the power of symmetry in integration - there's no clear way to evaluate the given integral so we try to make it into something we know. Using symmetry works well when functions are symmetric around the origin or you have inverse functions.

--------------------

Practice Problem: Evaluate $\displaystyle \int_0^1 \left(\sqrt[3]{1-x^7}-\sqrt[7]{1-x^3}\right) dx$.

## Monday, May 15, 2006

### I Hope That's All. Topic: Calculus. Level: AIME.

Problem: (Problem-Solving Through Problems - 6.7.6) Calculate

$\displaystyle \lim_{x \to \infty} \left(x \int_0^x e^{t^2-x^2} dt \right)$.

Solution: First, we want to simplify it. Note that $x$ is not the variable of integration, which means it can be regarded as a constant. In particular, we have

$\displaystyle x \int_0^x e^{t^2-x^2} dt = xe^{-x^2} \int_0^x e^{t^2} dt$.

We notice that $e^{t^2}$ is not fun to integrate, so we think L'Hopital's Rule. Rewrite the limit as

$\displaystyle \lim_{x \to \infty} \left(\frac{\int_0^x e^{t^2} dt}{\frac{e^{x^2}}{x}} \right)$.

This is a $\frac{\infty}{\infty}$ indeterminate form, so we can apply L'Hopital's Rule. Our limit becomes (after differentiating the top and the bottom)

$\displaystyle \lim_{x \to \infty} \left(\frac{e^{x^2}}{\frac{2x^2e^{x^2}-e^{x^2}}{x^2}}\right) = \lim_{x \to \infty} \left(\frac{x^2}{2x^2-1}\right) = \frac{1}{2}$.

QED.
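The limit can be eyeballed numerically (our own midpoint-rule check; the function name is ours):

```python
import math

def F(x, n=20000):
    """Midpoint-rule approximation of x * integral_0^x e^(t^2 - x^2) dt."""
    h = x / n
    return x * sum(math.exp(((i + 0.5) * h) ** 2 - x * x) * h for i in range(n))

# The values should approach 1/2 as x grows.
print(F(2.0), F(4.0), F(8.0))
```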

--------------------

Practice Problem: (Problem-Solving Through Problems - 6.7.2) Suppose that $f$ is a function with two continuous derivatives and $f(0) = 0$. Prove that the function $g$ defined by

$g(0) = f^{\prime}(0)$, $g(x) = \frac{f(x)}{x}$ for $x \neq 0$

has a continuous derivative.

## Sunday, May 14, 2006

### You Meany! Topic: Algebra/Calculus. Level: Olympiad.

Problem: (Problem-Solving Through Problems - 6.6.9) Let $f(x)$ be differentiable on $[0,1]$ with $f(0) = 0$ and $f(1) = 1$. For each positive integer $n$ and arbitrary given positive numbers $k_1, k_2, \ldots, k_n$, show that there exist distinct $x_1, x_2, \ldots, x_n$ such that

$\displaystyle \sum_{i=1}^n \frac{k_i}{f^{\prime}(x_i)} = \sum_{i=1}^n k_i$.

Solution: For convenience, define $S_m = k_1+k_2+\cdots+k_m$ for $m = 1, 2, \ldots, n$. For $i = 1, 2, \ldots, n-1$, let $a_i$ be the smallest positive real satisfying $f(a_i) = \frac{S_i}{S_n}$; each $a_i$ exists by the Intermediate Value Theorem, since $f(0) = 0 < \frac{S_i}{S_n} < 1 = f(1)$. In addition, let $a_0 = 0$ and $a_n = 1$ (so that $f(a_n) = \frac{S_n}{S_n} = 1$). Note that $a_0 < a_1 < \cdots < a_n$, since $f$ must first pass through $\frac{S_{i-1}}{S_n}$ before reaching $\frac{S_i}{S_n}$.

By the Mean Value Theorem, for each $i = 1, 2, \ldots, n$ there exists $x_i \in (a_{i-1}, a_i)$ (these intervals are disjoint, so the $x_i$ are distinct) such that

$f^{\prime}(x_i) = \frac{f(a_i)-f(a_{i-1})}{a_i-a_{i-1}} = \frac{\frac{S_i-S_{i-1}}{S_n}}{a_i-a_{i-1}} = \frac{k_i}{S_n(a_i-a_{i-1})}$

because $S_i-S_{i-1} = k_i$. Then

$\frac{k_i}{f^{\prime}(x_i)} = S_n(a_i-a_{i-1})$.

So

$\displaystyle \sum_{i=1}^n \frac{k_i}{f^{\prime}(x_i)} = S_n(a_n-a_{n-1}+a_{n-1}-a_{n-2}+\cdots+a_1-a_0) = S_n(a_n-a_0) = S_n$

as desired, since $a_n = 1, a_0 = 0$. QED.
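The construction can be traced numerically with a sample $f$ of our own, $f(x) = x^2$, for which the MVT point on each interval is just the midpoint:

```python
import math

# For f(x) = x^2: f(0) = 0, f(1) = 1, and f'(x) = 2x.
ks = [1.0, 2.0, 4.0]
S = [sum(ks[:m]) for m in range(len(ks) + 1)]        # partial sums S_0 = 0, ..., S_n
Sn = S[-1]
a = [math.sqrt(Sm / Sn) for Sm in S]                 # f(a_i) = S_i/S_n; a_0 = 0, a_n = 1
x = [(a[i] + a[i + 1]) / 2 for i in range(len(ks))]  # MVT points in (a_{i-1}, a_i)
lhs = sum(k / (2 * xi) for k, xi in zip(ks, x))      # sum of k_i / f'(x_i)
assert abs(lhs - Sn) < 1e-12
```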

--------------------

Comment: Probably the best application of the Mean Value Theorem and Intermediate Value Theorem together I've ever seen. This problem is a really good exercise in thinking creatively to achieve the desired result.

--------------------

Practice Problem: (Problem-Solving Through Problems - 6.6.2) Let $f: \mathbb{R} \to \mathbb{R}$ be a function such that for all reals $x$ and $y$, $|f(x)-f(y)| \le (x-y)^2$. Prove that $f$ is a constant function.

## Saturday, May 13, 2006

### All Crossed Up. Topic: Algebra/Calculus. Level: AIME/Olympiad.

Problem: (Problem-Solving Through Problems - 6.5.6) Suppose that $f: [0,1] \to \mathbb{R}$ is differentiable, $f(0) = 0$, and $f(x) > 0$ for $x \in (0,1)$. Let $k$ be a positive integer. Prove that there is a number $c$ in $(0,1)$ such that

$\frac{kf^{\prime}(c)}{f(c)} = \frac{f^{\prime}(1-c)}{f(1-c)}$. [Generalized]

Solution: Consider the function $g(x) = f^k(x)f(1-x)$; it is clearly also differentiable. Note that $g(0) = g(1) = 0$. Thus by Rolle's Theorem we know that there exists a $c \in (0,1)$ such that $g^{\prime}(c) = 0$. But then

$g^{\prime}(c) = kf^{k-1}(c)f^{\prime}(c)f(1-c)-f^k(c)f^{\prime}(1-c) = 0$

$f^{k-1}(c)[kf^{\prime}(c)f(1-c)-f(c)f^{\prime}(1-c)] = 0$.

Since $f(c) > 0 \Rightarrow f^{k-1}(c) > 0$, we must have

$kf^{\prime}(c)f(1-c)-f(c)f^{\prime}(1-c) = 0 \Rightarrow \frac{kf^{\prime}(c)}{f(c)} = \frac{f^{\prime}(1-c)}{f(1-c)}$

as desired. QED.
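A concrete instance (our own): with $f(x) = x$ and $k = 2$, the auxiliary function is $g(x) = x^2(1-x)$, and Rolle's point can be computed by hand:

```python
# g(x) = x^2 (1 - x) has g(0) = g(1) = 0; g'(x) = 2x(1 - x) - x^2 vanishes at c = 2/3.
k = 2
c = k / (k + 1)
g_prime = k * c ** (k - 1) * (1 - c) - c**k
assert abs(g_prime) < 1e-12

# The claimed identity k f'(c)/f(c) = f'(1-c)/f(1-c) with f(x) = x: both sides are 3.
assert abs(k / c - 1 / (1 - c)) < 1e-12
```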

--------------------

Comment: Inspiration for this method comes from cross multiplying and realizing that it looks a lot like the product rule; in fact, for $k = 1$ it is just that. Then the fact that $f(x) > 0$ for $x \in (0,1)$ allows us to multiply by extra factors of $f$ to build the auxiliary function needed for Rolle's Theorem.

--------------------

Practice Problem: (Problem-Solving Through Problems - 6.5.11) If $f: \mathbb{R} \to \mathbb{R}$ is a differentiable function and $a$ is an arbitrary real, prove there is a root of $f^{\prime}(x)-af(x) = 0$ between any two roots of $f(x) = 0$.

## Friday, May 12, 2006

### Really Rooty Real Roots. Topic: Algebra/Calculus. Level: AIME/Olympiad.

Problem: (Problem-Solving Through Problems - 6.5.5(b)) If $a_0, a_1, \ldots, a_n$ are real numbers satisfying

$\frac{a_0}{1}+\frac{a_1}{2}+\cdots+\frac{a_n}{n+1} = 0$,

show that the equation $a_0+a_1x+\cdots+a_nx^n = 0$ has at least one real root.

Solution: Let $f(x) = a_0+a_1x+\cdots+a_nx^n$ and consider integrating it over the interval $[0,1]$. We have

$\displaystyle \int_0^1 (a_0+a_1x+\cdots+a_nx^n)dx = \left[\frac{a_0x}{1}+\frac{a_1x^2}{2}+\cdots+\frac{a_nx^{n+1}}{n+1}\right]^1_0 = \frac{a_0}{1}+\frac{a_1}{2}+\cdots+\frac{a_n}{n+1} = 0$.

If $f$ had no real roots on the interval $[0,1]$, then by continuity it would be strictly positive or strictly negative there, and the integral of $f$ over $[0,1]$ would be strictly positive or negative as well, contradicting the computation above. Hence $f$ must take on both positive and negative values on $[0,1]$, so by the Intermediate Value Theorem there is a $c \in [0,1]$ such that $f(c) = 0$, and therefore $f$ has a real root. QED.
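A tiny worked example with coefficients of our own choosing: $f(x) = 1 - 2x$ satisfies the hypothesis and indeed has a root in $[0,1]$:

```python
# Coefficients a_0 = 1, a_1 = -2 satisfy a_0/1 + a_1/2 = 0.
coeffs = [1, -2]
assert sum(c / (i + 1) for i, c in enumerate(coeffs)) == 0

def f(x):
    return sum(c * x**i for i, c in enumerate(coeffs))

# A sign change on [0, 1] locates the root x = 1/2.
assert f(0) > 0 and f(1) < 0 and f(0.5) == 0
```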

--------------------

Comment: The condition involving the fractions should give away integration; from there, it's not hard to see that we want the interval $[0,1]$ and this gives us the result immediately.

--------------------

Practice Problem: (Problem-Solving Through Problems - 6.5.5(a)) Show that $5x^4-4x+1$ has a root between $0$ and $1$.

## Wednesday, May 10, 2006

### Cos It's A Polynomial. Topic: Algebra/Polynomials/Trigonometry. Level: AIME/Olympiad.

Problem: (Problem-Solving Through Problems - 6.2.7) Prove that the trigonometric polynomial

$a_0+a_1\cos{x}+\cdots+a_n\cos{nx}$,

where the coefficients are all real and $|a_0|+|a_1|+\cdots+|a_{n-1}| \le a_n$, has at least $2n$ zeros in the interval $[0, 2\pi)$.

Solution: Let $f_n(x) = a_0+a_1\cos{x}+a_2\cos{2x}+\cdots+a_n\cos{nx}$ and $k$ be a positive integer. We examine $f_n\left(\frac{(2k-1) \pi }{n}\right)$ and $f_n\left(\frac{2 k \pi}{n}\right)$.

$f_n\left(\frac{(2k-1) \pi}{n}\right) = a_0+a_1\cos{\frac{(2k-1) \pi}{n}}+\cdots-a_n$

and

$f_n\left(\frac{2 k \pi}{n}\right) = a_0+a_1\cos{\frac{2 k \pi}{n}}+\cdots+a_n$.

Since $|a_0|+|a_1|+\cdots+|a_{n-1}| > a_0+a_1\cos{x}+\cdots+a_{n-1}\cos{(n-1)x}$ (strict inequality because not all of the cosines can equal $\pm 1$ at the same time - unless $n = 0,1$ which can be handled easily), we have

$f_n\left(\frac{(2k-1) \pi}{n}\right) < |a_0|+|a_1|+\cdots+|a_{n-1}|-a_n \le 0$

and

$f_n\left(\frac{2 k \pi}{n}\right) > -(|a_0|+|a_1|+\cdots+|a_{n-1}|)+a_n \ge 0$.

So the function alternates between positive and negative. Therefore, by the Intermediate Value Theorem, $f_n(x)$ must have a zero between $\frac{(2k-2) \pi}{n}$ and $\frac{(2k-1) \pi}{n}$ and between $\frac{(2k-1) \pi}{n}$ and $\frac{2 k \pi}{n}$ for any positive integer $k$. But since $k$ can range from $1$ to $n$, this means there must be at least $2n$ zeros, as desired. QED.
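Counting sign changes numerically for sample coefficients (our own choice, satisfying the hypothesis) confirms the bound:

```python
import math

# n = 3 with |a_0| + |a_1| + |a_2| = 0.9 <= a_3 = 1.
a = [0.5, 0.3, 0.1, 1.0]
n = len(a) - 1

def f(x):
    return sum(a[j] * math.cos(j * x) for j in range(n + 1))

xs = [2 * math.pi * i / 10000 for i in range(10001)]
sign_changes = sum(1 for u, v in zip(xs, xs[1:]) if f(u) * f(v) < 0)
assert sign_changes >= 2 * n  # at least 2n = 6 zeros in [0, 2*pi)
```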

--------------------

Comment: The Intermediate Value Theorem is a good way of finding zeros; find one negative value and one positive one and there must exist a zero between them (if the function is continuous). The contrived condition on the problem also makes us think this.

--------------------

Practice Problem: (Problem-Solving Through Problems 6.2.4) Suppose $f: [0,1] \to [0,1]$ is continuous. Prove that there exists a number $c$ in $[0,1]$ such that $f(c) = c$.

## Monday, May 8, 2006

### Math Books

Check out the new section I added on Math Books.

### Find That Lattice Point. Topic: Number Theory. Level: AIME/Olympiad.

Problem: (Minkowski's Theorem) Any bounded plane convex region $R$ symmetrical about the origin with area $> 4$ contains at least two lattice points other than the origin itself.

Solution: WLOG, assume the lattice is a regular square lattice (with side length $1$, of course). The result can easily be generalized to different types of lattices.

Consider partitioning the lattice into $2 \times 2$ squares (with the origin being the vertex of four of them). Since the area of $R$ is greater than $4$, we can find two points in $R$ such that their coordinates relative to the $2 \times 2$ squares they are in are equivalent (consider stacking all the $2 \times 2$ squares up; there must exist a point directly above another one or the area is $\le 4$).

Call these points

$P(2a+\alpha, 2b+\beta)$ and $Q(2c+\alpha, 2d+\beta)$

with $a,b,c,d$ integers and $0 \le \alpha, \beta < 2$ reals. But since $R$ is symmetrical about the origin, we know

$P^{\prime}(-2a-\alpha, -2b-\beta)$ and $Q^{\prime}(-2c-\alpha, -2d-\beta)$

also exist in $R$. If $R$ is convex, then any point along a line segment connecting two points in $R$ is also in $R$. In particular, the midpoint of any line segment between two points in $R$ is in $R$. Hence the midpoints of $PQ^{\prime}$ and $P^{\prime}Q$ are in $R$. They are

$M_1(a-c, b-d)$ and $M_2(c-a, d-b)$,

which are lattice points not on the origin because $a,b,c,d$ are integers and $P \neq Q$. Therefore $R$ contains at least two other lattice points $M_1$ and $M_2$. QED.

--------------------

Comment: We could have invoked symmetry at the end to find $M_2$ from $M_1$, too. From the proof, it shouldn't be too hard to tell what I assumed at the beginning for other lattices; basically you just need to define a coordinate system with the same properties. I first encountered this problem at PROMYS and it appears in most Number Theory texts, as far as I know.

--------------------

Practice Problem: Can you generalize this result to a $3$-space? An $n$-space?

## Sunday, May 7, 2006

### Lagrange = The Grange = Populists??? Topic: Calculus/Inequalities. Level: Olympiad.

Definition: The gradient of a function $f(x, y, z)$ is defined by

$\bigtriangledown f(x, y, z) = f_x(x, y, z) \vec{i}+f_y(x, y, z) \vec{j}+f_z(x, y, z) \vec{k}$,

where $f_x(x, y, z)$, $f_y(x, y, z)$, and $f_z(x, y, z)$ are the partial derivatives of $f$ with respect to $x$, $y$, and $z$, respectively.

--------------------

Theorem: (Lagrange Multipliers) Let $f(x, y, z)$ and $g(x, y, z)$ be functions with continuous first partial derivatives on some open set containing the constraint surface $g(x, y, z) = 0$, with $\bigtriangledown g \neq 0$ at every point on this surface. If $f$ has a relative extremum, then it occurs at a point $(x_0, y_0, z_0)$ on the constraint surface such that there exists some constant $\lambda$ satisfying

$\bigtriangledown f(x_0, y_0, z_0) = \lambda \bigtriangledown g(x_0, y_0, z_0)$.

--------------------

Problem: (1999 Poland) Let $a,b,c$ be positive reals satisfying $a^2+b^2+c^2 = 1$. Prove that

$a+b+c+\frac{1}{abc} \ge 4\sqrt{3}$.

Solution: Several approaches work, but I'll do it with Lagrange Multipliers to illustrate their usefulness. We let $g(a,b,c) = a^2+b^2+c^2-1$ and $f(a,b,c) = a+b+c+\frac{1}{abc}$. Then

$\bigtriangledown g = 2a\vec{i}+2b\vec{j}+2c\vec{k}$

and

$\bigtriangledown f = \left(1-\frac{1}{a^2bc}\right)\vec{i} + \left(1-\frac{1}{ab^2c}\right)\vec{j} + \left(1-\frac{1}{abc^2}\right)\vec{k}$.

If $\lambda$ is a constant such that $\bigtriangledown f = \lambda \bigtriangledown g$, then we have

$\left(1-\frac{1}{a^2bc}\right)\vec{i} + \left(1-\frac{1}{ab^2c}\right)\vec{j} + \left(1-\frac{1}{abc^2}\right)\vec{k} = \lambda (2a\vec{i}+2b\vec{j}+2c\vec{k})$.

Looking at the $\vec{i}$ and $\vec{j}$ components, we obtain

$\lambda = \frac{1-\frac{1}{a^2bc}}{2a} = \frac{1-\frac{1}{ab^2c}}{2b}$, which gives $a^2b^3c-b^2 = a^3b^2c-a^2$ after cross-multiplying and clearing denominators.

This factors as

$(a-b)(a^2b^2c-a-b) = 0$,

so $a = b$ or $c = \frac{a+b}{a^2b^2}$. In the latter case, however, since $a, b \le 1$ we know $a \ge a^2b^2$ and $b \ge a^2b^2$ so $\frac{a+b}{a^2b^2} \ge 2$, which is impossible for $c$. Hence we must have $a = b$.

Similarly, we obtain $b = c$, so $a = b = c = \frac{1}{\sqrt{3}}$ is the only critical point with all positive values. Since $f \to \infty$ as any of $a, b, c \to 0^+$, this critical point must be the minimum.

Hence the minimum value of $f$ is $f\left(\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right) = 4\sqrt{3}$, as desired. QED.
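A random search over the constraint surface (ours; a sanity check of the answer, not a proof):

```python
import math
import random

def f(a, b, c):
    return a + b + c + 1.0 / (a * b * c)

random.seed(0)
best = float("inf")
for _ in range(200000):
    a, b, c = random.random(), random.random(), random.random()
    r = math.sqrt(a * a + b * b + c * c)
    if r > 0:
        best = min(best, f(a / r, b / r, c / r))  # project onto the unit sphere

# The proved minimum 4*sqrt(3) is a lower bound, and the search gets close to it.
assert best >= 4 * math.sqrt(3) - 1e-9
assert best < 4 * math.sqrt(3) + 0.05
```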

--------------------

Comment: Lagrange Multipliers can usually get pretty ugly, but they can also be very effective in certain situations. For people without knowledge of classical inequalities, this is probably the way to go in optimization problems.

--------------------

Practice Problem: Find the minimum and maximum of $f(x, y, z) = x^4+y^4+z^4$ satisfying $x+y+z = 1$.

## Friday, May 5, 2006

### Vernam Cipher. Topic: Cryptography.

Definition: (Vernam Cipher) Given a plaintext $A$ (the message you want to encode, with characters represented by the numbers $0$ through $25$) of length $N$ characters and a suitably random key $K$ of $N$ characters, the ciphertext $B$ (the encoded message, also represented by the numbers $0$ through $25$) is generated by

$B(i) = A(i)+K(i) \pmod{26}$,

where $X(i)$ denotes the value of the $i$th character of a string $X$.

--------------------

Definition: Let $P(a)$ be the probability of event $a$. Let $P(a;b)$ be the probability that both $a$ and $b$ occur and $P(a|b)$ be the probability that $a$ occurs given that $b$ does.

--------------------

Problem: Prove that the Vernam Cipher is secure - that is, the probability of $A(i)$ being a certain character given $B(i)$ is the same as the probability of $A(i)$ being that character not given $B(i)$.

Solution: Let $m, n$ be arbitrary characters. We establish that $P(A(i) = m)$ and $P(B(i) = n)$ are independent events because $K(i)$ is generated uniformly at random (here the suitable randomness comes into play). So $P(A(i) = m; B(i) = n) = P(A(i) = m) \cdot P(B(i) = n)$.

But then

$P(A(i) = m \mid B(i) = n) = \frac{P(A(i) = m; B(i) = n)}{P(B(i) = n)} = \frac{P(A(i) = m) \cdot P(B(i) = n)}{P(B(i) = n)} = P(A(i) = m)$

as desired. So knowing the ciphertext will not help at all in determining the plaintext, resulting in a secure cipher. QED.

--------------------

Comment: The practical problems with this cipher lie in the suitable randomness: creating such keys (and distributing them securely) is a more difficult task than it sounds.
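A minimal sketch of the cipher over the uppercase letters A-Z, mapped to $0$ through $25$ via `'A' = 0` (the function names are our own):

```python
import random
import string

def encrypt(plaintext, key):
    """B(i) = A(i) + K(i) mod 26, letters mapped to 0..25 via 'A' = 0."""
    return "".join(
        chr((ord(p) + ord(k) - 2 * ord("A")) % 26 + ord("A"))
        for p, k in zip(plaintext, key)
    )

def decrypt(ciphertext, key):
    """A(i) = B(i) - K(i) mod 26."""
    return "".join(
        chr((ord(c) - ord(k)) % 26 + ord("A"))
        for c, k in zip(ciphertext, key)
    )

message = "ATTACKATDAWN"
key = "".join(random.choice(string.ascii_uppercase) for _ in range(len(message)))
assert decrypt(encrypt(message, key), key) == message
```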