## Saturday, September 30, 2006

### Eww... Topic: Algebra/Inequalities. Level: AIME/Olympiad.

Problem: Find the minimum value of

$\frac{\left(x+\frac{1}{x}\right)^6-\left(x^6+\frac{1}{x^6}\right)-2}{ \left(x+\frac{1}{x}\right)^3-\left(x^3+\frac{1}{x^3}\right)}$

for $x > 0$.

Solution: Make a handy substitution $a = x$, $b = \frac{1}{x}$ and $1 = a^3b^3$. Then we want to minimize

$\frac{(a+b)^6-(a^6+b^6)-2a^3b^3}{(a+b)^3-(a^3+b^3)} = \frac{2a^4+5a^3b+6a^2b^2+5ab^3+2b^4}{a+b}$

after cancelling a common factor of $3ab$. But we see that the numerator divides evenly by $a+b$, and the expression becomes

$2a^3+3a^2b+3ab^2+2b^3 = (a+b)^3+(a^3+b^3)$.

But $a+b \ge 2$ and $a^3+b^3 \ge 2$ by AM-GM (with the same equality case, since $ab = 1$), so $(a+b)^3 \ge 8$ and

$(a+b)^3+a^3+b^3 \ge 10$

with equality at $a = b = 1$. QED.
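Not part of the proof, but a quick numerical sanity check in Python: evaluating the original expression on a grid of positive $x$ should never dip below $10$, with the value $10$ attained at $x = 1$.

```python
# Evaluate the original expression; by the proof it equals
# (x + 1/x)^3 + (x^3 + 1/x^3), which is at least 10 for x > 0.
def f(x):
    s1 = x + 1/x
    s3 = x**3 + 1/x**3
    s6 = x**6 + 1/x**6
    return (s1**6 - s6 - 2) / (s1**3 - s3)

values = [f(0.1 + 0.01 * i) for i in range(1, 1000)]  # grid over (0.11, 10.09)
assert abs(f(1.0) - 10.0) < 1e-9      # the minimum 10 is attained at x = 1
assert min(values) >= 10.0 - 1e-6     # nothing on the grid dips below 10
```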

--------------------

Comment: It was very useful to make the substitution and then homogenize the numerator so you could cancel things out evenly. Then classical inequalities (AM-GM) helped us finish it.

--------------------

Practice Problem: Prove that $(a^3+b^3+c^3-3abc)(a+b+c) \ge 0$.

## Wednesday, September 27, 2006

### This Is Mental Math?!? Topic: Number Theory. Level: AIME.

Problem: (2006-2007 Warm-Up 2) Let $f(x)$ be the sum of the digits of $x$. Find $f(f(f(8765^{4321})))$.

Solution: Let's try bounding our quantity. We know

$f(x) \le 9 \cdot \lceil \log{x} \rceil$

because $\lceil \log{x} \rceil$ is the number of digits of $x$ (except when $x$ is a power of ten, but those cases are trivial), the biggest digit is $9$, and all logs here are base $10$. So we can say that

$f(8765^{4321}) \le 9 \cdot \lceil \log{8765^{4321}} \rceil < 9 \cdot \log{10000^{10000}} = 360000$

so

$f(f(8765^{4321})) \le 9 \cdot \lceil \log{360000} \rceil = 9 \cdot \log{1000000} = 54$

and

$f(f(f(8765^{4321}))) \le 13$

because the largest sum of digits of any number at most $54$ is $4+9 = 13$ (attained at $49$). So we know our value is at most $13$. But remember that the sum of the digits of a number is congruent to the number itself $\pmod{9}$ (prove this!) and $8765^{4321} \equiv (-1)^{4321} \equiv 8 \pmod{9}$, so our answer is also $8 \pmod{9}$. But the only number at most $13$ that works is then $8$. QED.
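Of course, what takes bounding and a mod-9 argument by hand is a one-liner for a computer. A quick Python check (big integers are handled natively):

```python
def f(x: int) -> int:
    """Sum of the decimal digits of x."""
    return sum(int(d) for d in str(x))

n = 8765 ** 4321
assert n % 9 == 8            # matches the mod-9 argument
assert f(f(f(n))) == 8       # the answer
```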

--------------------

Comment: Admittedly, this is not likely to show up on a mental math test, but it's feasible to do in your head. The bounding is not horrific and taking the mod is also not bad. Of course, $45$ seconds is pretty quick, but if you knew how to approach the problem already, it was definitely possible (and done).

--------------------

Practice Problem: Show that the sum of the digits of a number is congruent to itself modulo nine.

## Monday, September 25, 2006

### Parity Check. Topic: Logic/NT. Level: AIME/Olympiad.

Problem: (2000 Putnam - B1) Let $a_j, b_j, c_j$ be integers for $1 \le j \le N$. Assume that for each $j$, at least one of $a_j, b_j, c_j$ is odd. Show that there exist integers $r, s, t$ such that $ra_j+sb_j+tc_j$ is odd for at least $\frac{4N}{7}$ values of $j$, $1 \le j \le N$.

Solution: A pretty notation-heavy, strange, contrived problem. Let's get started. Where do we get $7$ in the denominator? How about from the number of binary triples that contain at least one $1$? In fact we may simplify the problem so that $a_j, b_j, c_j, r, s, t$ are all in the set $\{0, 1\}$ because we only care about even and odd. Cool.

Now let's see... for which binary triples $(a_j, b_j, c_j)$ and $(r, s, t)$ is $ra_j+sb_j+tc_j$ odd? Try it out. We will list the binary triples and give them variable names to simplify our lives.

$A = (0, 0, 1)$
$B = (0, 1, 0)$
$C = (0, 1, 1)$
$D = (1, 0, 0)$
$E = (1, 0, 1)$
$F = (1, 1, 0)$
$G = (1, 1, 1)$.

Now we will write $ra_j+sb_j+tc_j$ as $U \cdot V$ where $U = (r, s, t)$ and $V = (a_j, b_j, c_j)$. We make the following table:

|   | A | B | C | D | E | F | G |
|---|---|---|---|---|---|---|---|
| A | X | . | X | . | X | . | X |
| B | . | X | X | . | . | X | X |
| C | X | X | . | . | X | X | . |
| D | . | . | . | X | X | X | X |
| E | X | . | X | X | . | X | . |
| F | . | X | X | X | X | . | . |
| G | X | X | . | X | . | . | X |

where an X in the entry in row $U$ and column $V$ signifies that $U \cdot V$ is odd. Notice that each choice of $(r, s, t)$ makes four odd and each choice of $(a_j, b_j, c_j)$ has four possible $(r, s, t)$ to make it odd. Let $k_U$ denote the number of times $U \cdot (a_j, b_j, c_j)$ is odd for $1 \le j \le N$. We want to show that there exists a $U$ such that $k_U \ge \frac{4N}{7}$. We know

$\displaystyle \sum_{U=A}^G k_U$

is the number of times we have an odd sum if we apply every possible $(r, s, t)$. But if we apply every $(r, s, t)$ we know from our table that each of the $N$ binary triples $(a_j, b_j, c_j)$ produces an odd for four of the seven $(r, s, t)$. So in fact

$\displaystyle \sum_{U=A}^G k_U = 4N$.

But since the average of the $k_U$'s is then $\frac{4N}{7}$, there must exist at least one $k_U$ such that $k_U \ge \frac{4N}{7}$, as desired. QED.
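The counting fact at the heart of the table can be brute-forced: for every nonzero binary triple $U$, the dot product $U \cdot V$ is odd for exactly four of the seven nonzero triples $V$, and symmetrically. A short Python check:

```python
# Verify the "four X's per row and per column" structure of the table.
from itertools import product

triples = [t for t in product((0, 1), repeat=3) if any(t)]
assert len(triples) == 7

def dot_is_odd(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2 == 1

for u in triples:
    assert sum(dot_is_odd(u, v) for v in triples) == 4  # each row has 4 X's
for v in triples:
    assert sum(dot_is_odd(u, v) for u in triples) == 4  # each column has 4 X's
```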

--------------------

Comment: IMO, a very cool problem. At first glance you stare blankly, wondering how they came up with such a ridiculous problem; then you remember it is a Putnam B1, so it couldn't possibly be that bad. Soon after comes the realization that everything reduces to binary, and finally the recognition of the seven as the number of binary triples with at least one $1$. Yay!

--------------------

Practice Problem: I have seven light bulbs which are off and seven switches, each labeled with one of the binary triples above (the bulbs likewise). A switch toggles a light bulb if the dot product (as above) of the switch's label and the bulb's label is odd. Show that if I flip each switch once the light bulbs will still all be off.

## Saturday, September 23, 2006

### Pell. Topic: NT. Level: AIME/Olympiad.

Problem: (2000 Putnam - A2) Prove that there exist infinitely many integers $n$ such that $n, n + 1, n + 2$ are each the sum of the squares of two integers. [Example: $0 = 0^2 + 0^2$, $1 = 0^2 + 1^2$, $2 = 1^2 + 1^2$.]

Solution: Consider the Pell Equation

$x^2-2y^2 = 1$.

It is well-known that this equation has an infinite number of positive integer solutions $(x,y)$; they can be derived from the truncated continued fraction representations of $\sqrt{2}$. So if we let $n = 2y^2 = y^2+y^2$, we have

$n+1 = x^2+0^2$

$n+2 = x^2+1^2$,

completing the problem. QED.
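As a sanity check (not part of the proof), the solutions of $x^2 - 2y^2 = 1$ can be generated from the fundamental solution $(3, 2)$ by $(x, y) \mapsto (3x+4y, 2x+3y)$; the Python sketch below verifies the first few and confirms that $n = 2y^2$ makes $n, n+1, n+2$ sums of two squares.

```python
from math import isqrt

def is_sum_of_two_squares(m):
    """Brute-force test: m = a^2 + b^2 for some integers a, b >= 0?"""
    return any(isqrt(m - a * a) ** 2 == m - a * a
               for a in range(isqrt(m) + 1))

x, y = 3, 2
for _ in range(6):
    assert x * x - 2 * y * y == 1          # still a Pell solution
    n = 2 * y * y
    assert all(is_sum_of_two_squares(n + i) for i in (0, 1, 2))
    x, y = 3 * x + 4 * y, 2 * x + 3 * y    # step to the next solution
```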

--------------------

Comment: This was one of the official solutions to the problem, even though all you had to know was that Pell Equations have an infinite number of solutions. There is a constructive solution to the problem as well, based on choosing certain $n$.

--------------------

Practice Problem: Find the constructive solution (i.e. all numbers of the form $n = f(k)$ work for some function $f$).

## Friday, September 22, 2006

### Infinite... POWER! Topic: Algebra/Inequalities/S&S. Level: AIME/Olympiad.

Problem: (2000 Putnam - A1) Let $A$ be a positive real number. What are the possible values of $\displaystyle \sum_{j=0}^{\infty} x_j^2$, given that $x_0, x_1, \ldots$ are positive numbers such that $\displaystyle \sum_{j=0}^{\infty} x_j = A$?

Solution: First, we will try to guess what the range is using inequalities. We obviously have

$\displaystyle \sum x_j^2 > 0$

because all the $x_j$ are positive. Also,

$\displaystyle \sum x_j^2 < \left(\sum x_j\right)^2 = A^2$

by "expansion" (infinitely, but that's ok). So for now, our guess is the interval $(0, A^2)$. We want to show that given any $\alpha \in (0, A^2)$ we can make a sequence such that $\displaystyle \sum x_j = A$ and $\displaystyle \sum x_j^2 = \alpha$. Well, we like easy sequences so consider if $x_j$ is a geometric sequence with common ratio $r$. Then

$\displaystyle \sum x_j = \frac{x_0}{1-r} = A$.

We want to show that given any $\alpha$, we can find a positive real $r$ satisfying $r < 1$, the above equation, and

$\displaystyle \sum x_j^2 = \frac{x_0^2}{1-r^2} = \alpha$.

Rewriting the two equations (and squaring the first), we have

$x_0^2 = A^2(1-r)^2 = \alpha(1-r^2)$

so

$(A^2+\alpha)r^2-(2A^2)r+(A^2-\alpha) = 0$

$(r-1)[(A^2+\alpha)r-(A^2-\alpha)] = 0$.

Since we want $|r| < 1$, we must use the second factor, which gives us

$r = \frac{A^2-\alpha}{A^2+\alpha}$.

But if $\alpha \in (0, A^2)$, clearly $0 < A^2 - \alpha < A^2 + \alpha$, so $r = \frac{A^2-\alpha}{A^2+\alpha} \in (0, 1)$; hence there exists a positive ratio such that $\displaystyle \sum x_j = A$ and $\displaystyle \sum x_j^2 = \alpha$. Hence the possible values are exactly $(0, A^2)$. QED.
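The construction can be checked exactly with rational arithmetic: for a given $A$ and target $\alpha \in (0, A^2)$, take $r = \frac{A^2-\alpha}{A^2+\alpha}$ and $x_0 = A(1-r)$, and both series sums come out right.

```python
from fractions import Fraction

A = Fraction(2)
for alpha in (Fraction(1, 2), Fraction(1), Fraction(7, 2)):  # targets in (0, A^2)
    r = (A**2 - alpha) / (A**2 + alpha)
    assert 0 < r < 1
    x0 = A * (1 - r)
    assert x0 / (1 - r) == A               # geometric series sum is A
    assert x0**2 / (1 - r**2) == alpha     # sum of squares is alpha
```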

--------------------

Comment: A surprisingly difficult A1 problem for a Putnam; it had fewer perfect scores than A2, B1, and B2 (which will probably be posted over the next week or so). It required some experience with inequalities to guess the range and then a clever choice of a geometric series to complete the proof.

--------------------

Practice Problem: What if the terms could be any nonzero real? What are the possible values of $\displaystyle \sum x_j^2$ then?

## Tuesday, September 19, 2006

### Stop Drawing On the Checkerboard! Topic: Logic.

Problem: (2001 Putnam - B1) Let $n$ be an even positive integer. Write the numbers $1, 2, \ldots, n^2$ in the squares of an $n \times n$ grid so that the $k$th row, from left to right, is

$(k-1)n+1, (k-1)n+2, \ldots, (k-1)n+n$.

Color the squares of the grid so that half of the squares in each row and in each column are red and the other half are black (a checkerboard coloring is one possibility). Prove that for each coloring, the sum of the numbers on the red squares is equal to the sum of the numbers on the black squares.

Solution: Consider all the values of the red squares $r_1, r_2, \ldots, r_{k}$ and all the values of the black squares $b_1, b_2, \ldots, b_{l}$. For each square (of any color) $a_i$, let

$a_i = R(a_i)n+C(a_i)$,

where $R(a_i)+1$ and $C(a_i)$ are the row and column of the square corresponding to $a_i$. We want to show that

$\displaystyle \sum r_i = \sum b_i$.

But we have

$\displaystyle \sum r_i = \sum R(r_i)n + \sum C(r_i)$ and $\displaystyle \sum b_i = \sum R(b_i)n + \sum C(b_i)$.

Since each row has half red and half black, we have

$\displaystyle \sum R(r_i)n = \sum R(b_i)n$

for each row and consequently the whole board. Similarly, since each column has half red and half black, we have

$\displaystyle \sum C(r_i) = \sum C(b_i)$

for each column and hence the whole board. Adding these two equalities together, we obtain

$\displaystyle \sum r_i = \sum b_i$

as desired. QED.
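For $n = 4$ the claim can even be verified exhaustively: there are only $2^{16}$ colorings to scan, and the $90$ of them with exactly two red squares per row and per column (a known count for $4 \times 4$ zero-one matrices with all line sums $2$) all balance the sums.

```python
# Exhaustive check for n = 4: every balanced coloring splits the total evenly.
n = 4
board = [[r * n + c + 1 for c in range(n)] for r in range(n)]  # (k-1)n + c entries
total = n * n * (n * n + 1) // 2

count = 0
for mask in range(1 << (n * n)):
    red = [[(mask >> (r * n + c)) & 1 for c in range(n)] for r in range(n)]
    if any(sum(row) != n // 2 for row in red):
        continue  # some row is not half red
    if any(sum(red[r][c] for r in range(n)) != n // 2 for c in range(n)):
        continue  # some column is not half red
    red_sum = sum(board[r][c] for r in range(n) for c in range(n) if red[r][c])
    assert 2 * red_sum == total   # red sum equals black sum
    count += 1

assert count == 90  # number of valid balanced colorings of the 4x4 board
```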

--------------------

Comment: Not too hard intuitively, but slightly difficult to put down a rigorous proof; assigning variables can be a good way of going about solving word problems. It might simplify the problem down to some simple algebra, which is good.

--------------------

Practice Problem: Let $n$ be a positive even integer. In how many ways can we fill an $n \times n$ board with $1$'s and $-1$'s such that the sum of each row and column is zero?

## Sunday, September 17, 2006

### Coin Flip. Topic: Probability/S&S. Level: AIME.

Problem: (2001 Putnam - A2) You have coins $C_1, C_2, \ldots, C_n$. For each $k$, $C_k$ is biased so that, when tossed, it has probability $\frac{1}{2k+1}$ of falling heads. If the $n$ coins are tossed, what is the probability that the number of heads is odd? Express your answer as a rational function of $n$.

Solution: As usual, we try it out. Let $P_k$ be the probability that we have an odd number of heads after $k$ tosses. We find that

$P_1 = \frac{1}{3}$, $P_2 = \frac{2}{5}$, $P_3 = \frac{3}{7}$.

Now we guess that $P_k = \frac{k}{2k+1}$, which we will prove by our favorite tool, induction. Suppose it is true for $k$. On the $(k+1)$th flip, if we previously had an odd number we want a tails and if we previously had an even number we want a heads. So

$P_{k+1} = \frac{2k+2}{2k+3}P_k+\frac{1}{2k+3}(1-P_k) = \frac{1}{2k+3}+\frac{2k+1}{2k+3}P_k$.

Plugging in our induction hypothesis, we get

$P_{k+1} = \frac{1}{2k+3}+\frac{2k+1}{2k+3} \cdot \frac{k}{2k+1} = \frac{k+1}{2k+3}$,

which is what we want. Hence we have

$P_n = \frac{n}{2n+1}$.

QED.
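The recursion and the closed form are easy to double-check with exact arithmetic:

```python
# Verify P_n = n/(2n+1) against the recursion, using Fractions for exactness.
from fractions import Fraction

p = Fraction(0)  # probability of an odd number of heads after 0 tosses
for k in range(1, 51):
    h = Fraction(1, 2 * k + 1)        # P(coin k lands heads)
    p = p * (1 - h) + (1 - p) * h     # odd stays odd on tails; even flips on heads
    assert p == Fraction(k, 2 * k + 1)
```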

--------------------

Comment: A classic probability recursion problem, reminiscent of some old AIME problems. It should not have been too difficult to notice the pattern and figure out that induction is the way to go. It also has ties to Dynamic Programming, one of the most interesting concepts in computer science, which is based on solving problems using the solutions of subproblems (in this case, fewer coins).

--------------------

Practice Problem: (1990 AIME - #9) A fair coin is to be tossed $10$ times. Let $\frac{i}{j}$, in lowest terms, be the probability that heads never occur on consecutive tosses. Find $i+j$.

## Friday, September 15, 2006

### Binary Ops. Topic: Algebra/Sets. Level: AMC/AIME.

Problem: (2001 Putnam - A1) Consider a set $S$ and a binary operation $*$, i.e., for each $a,b \in S$, $a*b \in S$. Assume $(a*b)*a = b$ for all $a, b \in S$. Prove that $a*(b*a) = b$ for all $a, b \in S$.

Solution: If we make the substitution $a = b*a$, we get

$((b*a)*b)*(b*a) = b$.

However, since $(b*a)*b = a$ (by switching $a$ and $b$), we thus have

$a*(b*a) = b$,

as desired. QED.
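One concrete model (which also answers the practice problem below, so skip this if you want to find your own) is the nonnegative integers under bitwise XOR, since $(a \oplus b) \oplus a = b$:

```python
# Check both identities for * = XOR on a small range.
from itertools import product

op = lambda a, b: a ^ b
for a, b in product(range(16), repeat=2):
    assert op(op(a, b), a) == b   # the hypothesis (a*b)*a = b
    assert op(a, op(b, a)) == b   # the conclusion a*(b*a) = b
```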

--------------------

Comment: Not an extremely hard problem, but making the right substitution took some time to find. Playing around with the given equality is pretty much the only option in this problem.

--------------------

Practice Problem: Find a set $S$ (preferably nontrivial) and operation $*$ satisfying the conditions of the problem above.

## Tuesday, September 12, 2006

### Multiply By Something. Topic: Calculus.

Problem: Solve the differential equation $y^{\prime\prime}+3y^{\prime}-4y = 0$.

Solution: Let's conveniently rewrite this as

$y^{\prime\prime}+4y^{\prime} = y^{\prime}+4y$

and multiply by something, perhaps $e^{4x}$. Then integrate, which gives

$\displaystyle \int (y^{\prime\prime}+4y^{\prime})e^{4x} = \int (y^{\prime}+4y)e^{4x}$.

Conveniently enough, we see that this is

$y^{\prime}e^{4x} = ye^{4x}+C_1$

(you can verify this by taking the derivative). Well we might as well be awesome and try that trick again. Multiply the whole thing by $e^{-5x}$ now and integrate

$\displaystyle -\int C_1 e^{-5x} = \int (y-y^{\prime})e^{-x}$.

The LHS is easy and integrating the RHS makes the equation

$\frac{1}{5}C_1e^{-5x} = -ye^{-x}+C_2$

(again, check by differentiating). Now solving for $y$, we have

$y = -\frac{1}{5}C_1 e^{-4x}+C_2 e^x = Ae^{-4x}+Be^x$

as the general solution. QED.
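A quick numerical spot check of the general solution (with the derivatives computed by hand, and arbitrary constants $A, B$): the residual of $y''+3y'-4y$ should vanish.

```python
import math

A, B = 2.0, -3.0  # arbitrary constants
def y(x):   return A * math.exp(-4 * x) + B * math.exp(x)
def yp(x):  return -4 * A * math.exp(-4 * x) + B * math.exp(x)
def ypp(x): return 16 * A * math.exp(-4 * x) + B * math.exp(x)

# The residual is zero up to floating-point error at several sample points.
residual = max(abs(ypp(x) + 3 * yp(x) - 4 * y(x))
               for x in (-1.0, 0.0, 0.5, 1.0, 2.0))
assert residual < 1e-9
```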

--------------------

Comment: A cool method for solving differential equations; it works for quite a few of them, though certainly not all. In any case, it took me some time to develop this method during class.

--------------------

Practice Problem: Solve the differential equation $y^{\prime\prime\prime}-y^{\prime} = 0$.

## Monday, September 11, 2006

### Gimme Your Signature. Topic: Geometry/Logic. Level: AIME.

Problem: (2002 Putnam - B2) Consider a polyhedron with at least five faces such that exactly three edges emerge from each of its vertices.

Two players play the following game:

Each player, in turn, signs his or her name on a previously unsigned face. The winner is the player who first succeeds in signing three faces that share a common vertex.

Show that the player who signs first will always win by playing as well as possible.

Solution: First, we claim that the polyhedron contains a face with at least four sides. To prove this, suppose the contrary, that all faces are triangles. Take any three faces that share a vertex. Call the shared vertex $A_1$ and the other vertices $A_2, A_3, A_4$. Since each vertex already has three edges emerging from it, we cannot add a new vertex; hence we have at most four faces, contradicting the assumption of at least five. So there exists a face with at least four sides.

Let $a$ be this face. We claim that if Player 1 signs on $a$ first, he or she can win on the third move. Let $a_0, a_1, \ldots, a_{k-1}$ be the faces that share an edge with $a$, where $k \ge 4$ because there are at least four such faces. If Player 2 does not sign on any of these faces, Player 1 can sign on any $a_i$ and the following turn can win with $a_{i+1}$ or $a_{i-1}$ (indices taken modulo $k$) since Player 2 cannot block both of them. If Player 2 does sign on $a_i$, then Player 1 signs on $a_j$ with $j \neq i \pm 1$ (again, modulo $k$), which exists because $k \ge 4$. Then again Player 1 can sign $a_{j \pm 1}$ the next turn to win, at most one of which can be blocked by Player 2. QED.

--------------------

Comment: Not really too hard of a problem; the only real challenge was showing that there exists a face with at least four edges. And the proof of that was not complex either. Just simple logic and a bit of geometry to finish it.

--------------------

Practice Problem: Suppose an ant is on the vertex of a cube. Each second, it can move along one edge. What is the probability that after $60$ seconds it will end up at the opposite vertex?

## Sunday, September 10, 2006

### Hem A Sphere. Topic: Geometry/Logic. Level: AIME.

Problem: (2002 Putnam - A2) Given any five points on a sphere, show that some four of them must lie on a closed hemisphere.

Solution: Well, that doesn't sound so bad. And, in fact, it really isn't. First, note that through any two of the points we can draw a great circle; in general we cannot do the same for three arbitrarily chosen points (think about it). So two is the most points we can guarantee on the boundary.

So now we have a great circle through two points. Well, maybe these hemispheres work. There are three points left, so by the Pigeonhole Principle one of the closed hemispheres has at least four points. Whoa, we're done. QED.

--------------------

Comment: That was almost too simple, especially for a Putnam problem; it shows how knowing how to think can take you a long way even if you don't know much math.

--------------------

Practice Problem: Can you generalize this to an $n$-dimensional sphere?

## Saturday, September 9, 2006

Problem: (2002 Putnam - A1) Let $k$ be a fixed positive integer. The $n$-th derivative of $\frac{1}{x^k-1}$ has the form $\frac{P_n(x)}{(x^k-1)^{n+1}}$, where $P_n(x)$ is a polynomial. Find $P_n(1)$.

Solution: From the looks of this (and testing it), the derivative turns out to be really nasty and no fun at all. So let's try simplifying it using the notation they give us (i.e. the $P_n(x)$ polynomials).

If the $n$-th derivative is $\frac{P_n(x)}{(x^k-1)^{n+1}}$, then the $(n+1)$-th derivative is, by the quotient rule,

$\frac{P^{\prime}_n(x)(x^k-1)^{n+1}-P_n(x)(n+1)(x^k-1)^nkx^{k-1}}{(x^k-1)^{2n+2}} = \frac{P^{\prime}_n(x)(x^k-1)-P_n(x)(n+1)kx^{k-1}}{(x^k-1)^{n+2}}$

after we cancel $(x^k-1)^n$. But this basically gives us a recursive relationship between $P_{n+1}(x)$ and $P_n(x)$...

$P_{n+1}(x) = P^{\prime}_n(x)(x^k-1)-P_n(x)(n+1)kx^{k-1}$

$P_{n+1}(1) = P_n(1)(n+1)(-k)$.

Wow. That looks good. Seems that we're almost there. Setting $P_0(1) = 1$ (no derivatives), we can easily show by induction that

$P_n(1) = (-k)^n \cdot n!$,

finishing the problem. QED.
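The induction is simple enough that a few lines of Python can confirm the closed form against the recursion $P_{n+1}(1) = -(n+1)kP_n(1)$ for several values of $k$:

```python
from math import factorial

for k in (1, 2, 5):
    p = 1  # P_0(1) = 1, since no derivatives have been taken
    for n in range(20):
        assert p == (-k) ** n * factorial(n)   # the closed form
        p = -(n + 1) * k * p                   # the recursion
```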

--------------------

Comment: Looked a lot more difficult than it actually was, especially with the ugly derivatives. After we figured out that the quotient rule allowed us to derive a simple recursion, it was basically done from there. Solving for the sequence should be clear that a factorial results from the $(n+1)$ term in the recursive definition.

--------------------

Practice Problem: (2002 Putnam - B1) Shanille O'Keal shoots free throws on a basketball court. She hits the first and misses the second, and thereafter the probability that she hits the next shot is equal to the proportion of shots she has hit so far. What is the probability she hits exactly $50$ of her first $100$ shots?

## Wednesday, September 6, 2006

### Determinated! Topic: NT/S&S. Level: Olympiad.

Problem: (2004 Putnam - A3) Define a sequence $\displaystyle \{u_n\}^{\infty}_{n=0}$ by $u_0 = u_1 = u_2 = 1$, and thereafter by the condition that

$u_n u_{n+3}-u_{n+1}u_{n+2} = n!$

for all $n \ge 0$. Show that $u_n$ is an integer for all $n$. (By convention, $0! = 1$.) [Reworded: the original statement expressed the condition as a determinant.]

Solution: Rewrite the given condition as

$u_{n+3} = \frac{n!+u_{n+1}u_{n+2}}{u_n}$.

First, let's calculate some terms; we get

$1, 1, 1, 2, 3, 8, 15, 48, 105$.

Now, this is an interesting sequence, but we manage to notice that $u_n u_{n+1} = n!$, or $u_{n+1} = \frac{n!}{u_n}$. Aha! So we'll prove this by induction. Assuming it is true for $1, 2, \ldots, n+2$ (the base case is trivial), we have

$u_{n+3} = \frac{n!+u_{n+1} u_{n+2}}{u_n}$

and we make the substitutions $u_{n+1} = \frac{n!}{u_n}$, $u_{n+2} = (n+1)u_n$, and $u_n = \frac{u_{n+2}}{n+1}$ (yes the last two are equivalent) to get

$u_{n+3} = \frac{n!+\frac{n!}{u_n} (n+1) u_n}{\frac{u_{n+2}}{n+1}} = \frac{(n+2)!}{u_{n+2}}$

after a big mess of algebra. So it's true. Let's induct again (assume true for $1, 2, \ldots, n+1$) to show that $u_{n+2}|(n+2)!$. Remember we had $u_{n+2} = (n+1)u_n$, but $u_n|n!$ so $(n+1)u_n|(n+2)! \Rightarrow u_{n+2}|(n+2)!$ as desired. Hence each $u_n$ is an integer. QED.
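Computing the sequence directly from the defining relation with exact rational arithmetic confirms both the integrality and the pattern $u_n u_{n+1} = n!$:

```python
from fractions import Fraction
from math import factorial

u = [Fraction(1), Fraction(1), Fraction(1)]          # u_0 = u_1 = u_2 = 1
for n in range(30):
    u.append((factorial(n) + u[n + 1] * u[n + 2]) / u[n])

assert all(x.denominator == 1 for x in u)            # every term is an integer
assert all(u[n] * u[n + 1] == factorial(n) for n in range(32))
assert [int(x) for x in u[:9]] == [1, 1, 1, 2, 3, 8, 15, 48, 105]
```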

--------------------

Comment: Whew, another way (the official solution) would be to notice that $u_n = (n-1)(n-3) \cdots (4)(2)$ for odd $n$ and $u_n = (n-1)(n-3) \cdots (3)(1)$ for even $n$ and induct. That seems rather unnatural for people with non-godly observation skills, though.

--------------------

Practice Problem: (2004 Putnam - B2) Let $m$ and $n$ be positive integers. Show that

$\frac{(m+n)!}{(m+n)^{m+n}} < \frac{m!}{m^m} \cdot \frac{n!}{n^n}$.

[Hint: look for a one-line solution.]

## Monday, September 4, 2006

### Throw! Topic: NT/Probability. Level: AIME.

Problem: (2004 Putnam - A1) Basketball star Shanille O'Keal's team statistician keeps track of the number, $S(N)$, of successful free throws she has made in her first $N$ attempts of the season. Early in the season, $S(N)$ was less than $80\%$ of $N$, but by the end of the season, $S(N)$ was more than $80\%$ of $N$. Was there necessarily a moment in between when $S(N)$ was exactly $80\%$ of $N$?

Solution: We claim that the answer is yes and we will prove it by contradiction. Suppose that there isn't such a moment. Then there is some $k$ such that $S(k) < 0.8k$ and $S(k+1) > 0.8(k+1)$. Clearly, the $(k+1)$th throw must have been in for the ratio to increase. So $S(k+1) = S(k)+1$. Let $a = S(k)$. The above inequalities become

$a < 0.8k$ and $a+1 > 0.8k+0.8$,

which simplify to

$0.8k-0.2 < a < 0.8k$.

Multiply this by $5$ to get

$4k-1 < 5a < 4k$.

But since $5a$ is an integer and there are no integers between $4k-1$ and $4k$, we have a contradiction. Hence the answer is yes, as claimed. QED.
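A randomized check of the claim (again, not a proof, just reassurance): simulate seasons whose hit rate starts below $80\%$ and rises above it, and confirm that at the first crossing $S(N)$ is exactly $0.8N$, i.e. $5S(N) = 4N$ in integers.

```python
import random

random.seed(42)
for _ in range(200):
    s, n = 0, 0
    below = False
    crossed_at = None
    for _ in range(500):
        n += 1
        s += random.random() < 0.85   # each shot hits with probability 0.85
        if 5 * s < 4 * n:
            below = True              # currently under 80%
        elif below and crossed_at is None:
            crossed_at = (s, n)       # first moment at or above 80% after being below
    if crossed_at:
        s0, n0 = crossed_at
        assert 5 * s0 == 4 * n0       # the ratio is exactly 80% at the crossing
```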

--------------------

Comment: A tricky problem; intuitively it seems like there must exist some case for which this isn't true, but after searching a bit for a counterexample, I pretty much assumed it was true and began trying to prove it. This wasn't hard as we reached a contradiction pretty easily.

--------------------

Practice Problem: For what other numbers does the above problem hold instead of $80\%$?

## Saturday, September 2, 2006

### Average Joe. Topic: Algebra/NT. Level: AIME/Olympiad.

Problem: (2003 Putnam - B2) Let $n$ be a positive integer. Starting with the sequence

$1, \frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{n}$,

form a new sequence of $n-1$ entries

$\frac{3}{4}, \frac{5}{12}, \ldots, \frac{2n-1}{2n(n-1)}$

by taking the averages of two consecutive entries in the first sequence. Repeat the averaging of neighbors on the second sequence to obtain a third sequence of $n-2$ entries, and continue until the final sequence produced consists of a single number $x_n$. Show that $x_n < \frac{2}{n}$.

Solution: What a long problem statement. But anyway, it doesn't seem too complicated; we'll just try out some sequences and see where we go...

$1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}$

ends up at

$\frac{1}{16} \left(1+4 \cdot \frac{1}{2}+6 \cdot \frac{1}{3}+4 \cdot \frac{1}{4}+\frac{1}{5}\right)$,

the coefficients of which look suspiciously like the numbers in Pascal's Triangle. We claim that

$\displaystyle x_n = \frac{1}{2^{n-1}} \sum_{i=1}^n \frac{1}{i} \binom{n-1}{i-1}$.

This can be shown through an easy induction that I won't bother to go through here; work out the details if you want. So we want to show

$\displaystyle \frac{1}{2^{n-1}} \sum_{i=1}^n \frac{1}{i} \binom{n-1}{i-1} < \frac{2}{n}$

$\displaystyle \sum_{i=1}^n \frac{n}{i} \binom{n-1}{i-1} < 2^n$.

But noting the combinatorial identity $\frac{n}{i} \binom{n-1}{i-1} = \binom{n}{i}$, we have

$\displaystyle \sum_{i=1}^n \frac{n}{i} \binom{n-1}{i-1} = \sum_{i=1}^n \binom{n}{i} = 2^n-1 < 2^n$

as desired. QED.
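The claimed general form is also easy to confirm by just carrying out the averaging with exact fractions and comparing against the binomial expression:

```python
from fractions import Fraction
from math import comb

for n in range(2, 15):
    seq = [Fraction(1, i) for i in range(1, n + 1)]   # 1, 1/2, ..., 1/n
    while len(seq) > 1:
        seq = [(a + b) / 2 for a, b in zip(seq, seq[1:])]  # average neighbors
    x = seq[0]
    closed = Fraction(1, 2 ** (n - 1)) * sum(Fraction(comb(n - 1, i - 1), i)
                                             for i in range(1, n + 1))
    assert x == closed            # matches the binomial closed form
    assert x < Fraction(2, n)     # the inequality to be proven
```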

--------------------

Comment: The problem was not difficult once you figured out the general form for $x_n$, and with a bit of experimentation and induction, that was not too hard either. Knowledge of basic combinatorial identities is really useful; these things show up all the time.

--------------------

Practice Problem: (2003 Putnam - A2) Let $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$ be nonnegative real numbers. Show that

$(a_1a_2\cdots a_n)^{\frac{1}{n}}+(b_1b_2\cdots b_n)^{\frac{1}{n}} \le [(a_1+b_1)(a_2+b_2)\cdots (a_n+b_n)]^{\frac{1}{n}}$.

[Can you say... easy? For Putnam, anyway.]