Tuesday, December 26, 2006

Pretty Sorta Equal To. Topic: Sets. Level: AIME/Olympiad.

Problem: (Stanford Putnam Practice) Let $ S = \{1, 2, \ldots n\} $ and let $ S_1, S_2, \ldots, S_n $ be subsets of $ S $ satisfying $ |S_i \cap S_j| \le 1 $ for $ i \neq j $. Show that $ \max(|S_1|+|S_2|+\cdots+|S_n|) $ is asymptotically $ n \sqrt{n} $.

Solution: We will start by showing that the expression is at most asymptotically $ n\sqrt{n} $. Suppose it is asymptotically $ nk $. Then $ |S_i| \sim k $. And on average, any element will be contained in $ k $ of the $ S_i $. Consider the $ k $ sets that contain the element $ 1 $. The remaining $ k-1 $ elements of each subset must all be different, so there must be at least $ k(k-1)+1 $ elements in the original set. So $ k(k-1)+1 \sim n \Rightarrow k \sim \sqrt{n} $ is an upper bound. Now let's show that we can obtain this upper bound.

Let $ m $ be the closest integer to $ \sqrt{n} $. Partition $ S $ into $ m $ subsets, approximately the intervals $ (1, m); (m+1, 2m); \ldots ; (n-m+1, n) $. Each part should contain about $ m $ elements (because the argument is asymptotic, it doesn't really matter). Label them by $ a_{11}, a_{12}, \ldots, a_{1m}, a_{21}, a_{22}, \ldots, a_{mm} $ where $ a_{ij} $ is the $ j $th element of the $ i $th part. We want to choose $ n $ sets of $ m $ elements satisfying the conditions of the problem; we claim that we can do this by choosing one element from each interval for each set.

First, construct $ m $ sets:

$ \{a_{11}, a_{21}, a_{31}, \ldots, a_{m1}\} $

$ \{a_{12}, a_{22}, a_{32}, \ldots, a_{m2}\} $

$ \cdots $

$ \{a_{1m}, a_{2m}, a_{3m}, \ldots, a_{mm}\} $.

Then, to create the next $ m $ sets, we take the elements which lie along the same diagonal of the $ m \times m $ array of elements (wrapping around to the start when we run off the edge), for instance,

$ \{a_{11}, a_{22}, a_{33}, \ldots a_{mm}\} $

$ \{a_{21}, a_{32}, a_{43}, \ldots a_{m(m-1)}, a_{1m}\} $

$ \{a_{31}, a_{42}, a_{53}, \ldots, a_{1(m-1)}, a_{2m}\} $

$ \cdots $

$ \{a_{m1}, a_{12}, a_{23}, \ldots, a_{(m-1)m}\} $.

In fact, if we repeat this process, shifting the diagonals by a different amount each time, we get all $ n $ sets of $ m $ elements, any two of which share at most a single element. Hence we know that the maximum is asymptotic to $ nm \sim n\sqrt{n} $. QED.
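
For a concrete sanity check (not part of the proof), the construction is easy to test by brute force when $ m $ is prime, say $ m = 7 $ and $ n = 49 $: for each "slope" $ s $ and starting column $ j $, take the element in row $ i $ and column $ (j + si) \bmod m $, and confirm that any two of the resulting $ n $ sets intersect in at most one element. The sketch below is hypothetical (the indexing scheme is my own bookkeeping), in the same C++ spirit as the gcd snippet later in these notes.

#include <cstdio>
#include <vector>
using namespace std;

int main() {
    const int m = 7, n = m * m;                // n = 49, m = sqrt(n), m prime
    // sets[s][j] = the set with "slope" s and starting column j
    vector<vector<vector<int>>> sets(m, vector<vector<int>>(m));
    for (int s = 0; s < m; s++)
        for (int j = 0; j < m; j++)
            for (int i = 0; i < m; i++)        // one element from each block of m
                sets[s][j].push_back(i * m + (j + s * i) % m);   // a_{i,(j+si) mod m}

    // check that every pair of the n sets shares at most one element
    int worst = 0;
    for (int a = 0; a < n; a++)
        for (int b = a + 1; b < n; b++) {
            const auto &A = sets[a / m][a % m], &B = sets[b / m][b % m];
            int common = 0;
            for (int x : A) for (int y : B) if (x == y) common++;
            if (common > worst) worst = common;
        }
    printf("n = %d sets of size %d, max pairwise intersection = %d\n", n, m, worst);
    // expected: max pairwise intersection = 1, so the total size is n*m = n*sqrt(n)
}

For composite $ m $ the same shifting can produce larger intersections, which is one of the places where the fuzziness mentioned in the comment below lives.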

--------------------

Comment: The above proof is very "fuzzy." It lacks a lot of the rigor needed to actually prove the result, but it gives the basic idea, which is all I was aiming for. A solid proof of the result would probably be very long and tedious (unless some other elegant method is used, or perhaps some other strong known results).

Sunday, December 24, 2006

Gimme That! Topic: Sets. Level: AIME/Olympiad.

Problem: (1996 Putnam - B1) Define a selfish set to be a set which has its own cardinality (number of elements) as an element. Find, with proof, the number of subsets of $ \{1, 2, \ldots, n\} $ which are minimal selfish sets, that is, selfish sets none of whose proper subsets is selfish.

Solution: We define $ S_n $ to be the desired number. Trying small cases, we get

$ S_1 = 1: \{1\} $

$ S_2 = 1: \{1\} $

$ S_3 = 2: \{1\}; \{2, 3\} $

$ S_4 = 3: \{1\}; \{2, 3\}; \{2, 4\} $

$ S_5 = 5 $ and $ S_6 = 8 $.

Well now we're tempted to say that $ S_n = F_n $, the $ n $th Fibonacci number (with $ F_1 = F_2 = 1 $). To do so, we want to match the minimal selfish subsets of $ T_n = \{1, 2, \ldots, n\} $ with the minimal selfish subsets of $ T_{n-1} = \{1, 2, \ldots, n-1\} $ together with those of $ T_{n-2} = \{1, 2, \ldots, n-2\} $.

Clearly all the minimal selfish subsets of $ T_{n-1} $ are in $ T_n $ as well. The remaining minimal selfish subsets of $ T_n $ are those which contain the element $ n $. Consider the following bijection between the minimal selfish subsets of $ T_n $ containing $ n $ and the minimal selfish subsets of $ T_{n-2} $:

$ P = \{a_1, a_2, \ldots, a_k, n\} \leftrightarrow Q = \{a_1-1, a_2-1, \ldots, a_k-1\} $.

Clearly $ a_i > 1 $, because otherwise $ \{1\} $ would be a proper selfish subset of $ P $. Also, $ a_k \le n-1 $. Hence the mapping works both ways; it remains to show that if one is a minimal selfish subset, so is the other. Since $ |P| = |Q|+1 $, we know that both are selfish or neither of them is. Furthermore, if $ P $ contains a proper selfish subset containing $ n $, say $ \{b_1, b_2, \ldots, b_j, n\} $, then $ Q $ contains the proper selfish subset $ \{b_1-1, b_2-1, \ldots, b_j-1\} $, and if $ Q $ contains a proper selfish subset $ \{c_1, c_2, \ldots, c_j\} $, then $ P $ contains the proper selfish subset $ \{c_1+1, c_2+1, \ldots, c_j+1, n\} $. So the bijection works.

That means we have $ S_n = S_{n-1}+S_{n-2} $ so $ S_n = F_n $, as desired. QED.
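
The small cases (and the Fibonacci claim) are easy to brute-force before trusting the argument: a subset is selfish if it contains its own size, and minimal selfish if no proper subset is selfish. A hypothetical sketch using bitmask enumeration, workable for $ n $ up to about 15:

#include <cstdio>

// does the subset encoded by mask (bit i-1 <=> element i) contain its own size?
bool selfish(int mask) {
    int size = __builtin_popcount(mask);       // GCC/Clang builtin popcount
    return size >= 1 && ((mask >> (size - 1)) & 1);
}

int main() {
    for (int n = 1; n <= 15; n++) {
        int count = 0;
        for (int mask = 1; mask < (1 << n); mask++) {
            if (!selfish(mask)) continue;
            bool minimal = true;
            // walk over all nonempty proper subsets of mask
            for (int sub = (mask - 1) & mask; sub > 0 && minimal; sub = (sub - 1) & mask)
                if (selfish(sub)) minimal = false;
            if (minimal) count++;
        }
        printf("S_%d = %d\n", n, count);       // should print 1, 1, 2, 3, 5, 8, ...
    }
}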

--------------------

Comment: Decently hard for a B1, at least the part that involved rigorously proving that the mapping is actually a bijection. It was pretty easy to figure out what the answer was, but as usual much more difficult to prove it.

--------------------

Practice Problem: (1996 Putnam - A1) Find the least number $ A $ such that for any two squares of combined area $ 1 $, a rectangle of area $ A $ exists such that the two squares can be packed in the rectangle (without interior overlap). You may assume that the sides of the squares are parallel to the sides of the rectangle.

Friday, December 22, 2006

Leftovers. Topic: Algebra/Polynomials. Level: AIME.

Problem: (Stanford Putnam Practice) Find the remainder when you divide $ x^{81}+x^{49}+x^{25}+x^9+x $ by $ x^3-x $.

Solution: Since they're both divisible by $ x $, we first divide that out, and just remember to multiply the remainder by $ x $ at the end. Let $ xP(x) $ be the first polynomial and $ xQ(x) $ be the second. We have

$ P(x) = g(x)Q(x)+r(x) $

for some polynomials $ g(x), r(x) $ with $ \deg(r) < \deg(Q) = 2 $. We want to find $ r(x) $. Consider the two roots of $ Q(x) = x^2-1 = (x+1)(x-1) $. Plugging them into the equation, we obtain

$ P(1) = r(1) $ and $ P(-1) = r(-1) $.

Evaluating $ P $ at those two values, we find that

$ r(1) = 5 $ and $ r(-1) = 5 $.

But since $ r $ has degree less than $ 2 $, the only possible $ r $ is the constant polynomial $ r(x) = 5 $. Then the remainder is $ xr(x) = 5x $. QED.
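
If you'd rather not trust the root-plugging argument, plain polynomial long division confirms the answer, since the divisor is monic and the coefficients stay small. A hypothetical sketch with dense coefficient vectors:

#include <cstdio>
#include <vector>
using namespace std;

// remainder of p(x) divided by a monic d(x); index k holds the coefficient of x^k
vector<long long> polyRem(vector<long long> p, const vector<long long> &d) {
    int dd = d.size() - 1;
    for (int k = (int)p.size() - 1; k >= dd; k--) {
        long long q = p[k];                    // leading coefficient to eliminate
        if (q == 0) continue;
        for (int j = 0; j <= dd; j++) p[k - dd + j] -= q * d[j];
    }
    p.resize(dd);                              // degree of remainder < deg d
    return p;
}

int main() {
    vector<long long> p(82, 0), d = {0, -1, 0, 1};     // d(x) = x^3 - x
    p[81] = p[49] = p[25] = p[9] = p[1] = 1;           // x^81 + x^49 + x^25 + x^9 + x
    vector<long long> r = polyRem(p, d);
    printf("remainder = %lldx^2 + %lldx + %lld\n", r[2], r[1], r[0]);   // expect 5x
}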

--------------------

Comment: This is a super important technique when it comes to polynomial division. Using the $ \deg(Q) $ roots of $ Q $ and the fact that $ \deg(r) < \deg(Q) $, we can in principle always determine $ r $ this way, without dividing. This usually comes up when $ Q $ has nice roots, so if it doesn't, look for a better way.

--------------------

Practice Problem: (Stanford Putnam Practice) How can the quadratic equation

$ \frac{(x-a)(x-b)}{(c-a)(c-b)}+\frac{(x-b)(x-c)}{(a-b)(a-c)}+\frac{(x-c)(x-a)}{(b-c)(b-a)} = 1 $

have three roots $ x = a, b, c $? [Reworded]

Wednesday, December 20, 2006

Strikethrough. Topic: Geometry/NT. Level: AIME.

Problem: (Stanford Putnam Practice) Let $ P_1, P_2, \ldots , P_{2007} $ be distinct points in the plane. Connect the points with the line segments $ P_1P_2, P_2P_3, \ldots, P_{2006}P_{2007}, P_{2007}P_1 $. Can one draw a line that passes through the interior of every one of these segments?

Solution: Playing around with pictures for a while, our intuition says that since $ 2007 $ is odd, this is impossible. Now let's try to prove it. Suppose there exists such a line $ l $. Then we know that the endpoints of each line segment lie on opposite sides of the line. So

$ P_1 $ and $ P_2 $ lie on opposite sides of $ l $;

$ P_2 $ and $ P_3 $ lie on opposite sides of $ l $;

$ \cdots $

$ P_{2007} $ and $ P_1 $ lie on opposite sides of $ l $.

But wait, by a simple induction we know that $ P_1, P_3, P_5, \ldots, P_{2007} $ lie on the same side. That gives us the contradiction we seek. Thus $ l $ does not exist, as desired. QED.

--------------------

Comment: Not a bad problem, though pretty simple after you realized that you could fix the line before the points. It's easy to see that this generalizes to any set of an odd number of points (try the sides of a triangle, for example), so we have proven in fact a decently strong result.

--------------------

Practice Problem: (Stanford Putnam Practice) Show that if every room in a house has an even number of doors, then the number of outside entrance doors must be even as well.

Tuesday, December 19, 2006

I Am The Greatest (Common Divisor)! Topic: Number Theory. Level: Olympiad.

Problem: (Stanford Putnam Practice) Suppose $ (a_i)_{i \ge 1} $ is a sequence of positive integers satisfying $ \gcd(a_i, a_j) = \gcd(i, j) $ for $ i \neq j $. Show that $ a_i = i $ for each $ i $.

Solution: Let $ n $ be an arbitrary positive integer. If we can show that $ a_n = n $ we will be done. We have

$ \gcd(a_n, a_{2n}) = \gcd(n, 2n) = n $

so $ n|a_n $. Then $ a_n = kn $ for some positive integer $ k $; if $ k = 1 $ we are done, so suppose $ k > 1 $. By the same argument, $ kn|a_{kn} $, so $ a_{kn} = ckn $ for some positive integer $ c $. Then, however,

$ \gcd(a_n, a_{kn}) = \gcd(kn, ckn) = kn $ while the hypothesis gives $ \gcd(a_n, a_{kn}) = \gcd(n, kn) = n $,

so we must have $ kn = n \Rightarrow k = 1 $, a contradiction. Thus $ a_n = n $. QED.

--------------------

Comment: At first, induction seemed the way to go, but after a while you realize that the induction step is, well, not very likely since $ a_1 = 1, a_2 = 2, \ldots, a_k = k $ puts very few restrictions on $ a_{k+1} $. So then you look to a better option, trying to find the strongest use of $ \gcd(i, j) $. It turns out that this happens when $ j $ is a multiple of $ i $, which is exactly how the above proof starts.

--------------------

Practice Problem: (Stanford Putnam Practice) Let $ p > 5 $ be a prime number. Show that $ p $ divides infinitely many of the numbers

$ 1, 11, 111, 1111, \ldots $.

[Reworded]

Sunday, December 17, 2006

Toast To Euclid. Topic: Number Theory. Level: AMC/AIME.

Introduction: (Euclid's Algorithm) Given two positive integers $ a, b $, we can determine $ \gcd(a,b) $ through the following algorithm. Write $ a = bq+r $, where $ q, r $ are nonnegative integers such that $ 0 \le r < b $. If $ r = 0 $, then $ \gcd(a, b) = b $. Otherwise, replace $ (a, b) $ with $ (b, r) $ and repeat. For example, if we wanted to find $ \gcd(140, 98) $, we would do

$ 140 = 98 \cdot 1+42 $

$ 98 = 42 \cdot 2+14 $

$ 42 = 14 \cdot 3+0 $

so $ \gcd(140,98) = 14 $. A proof of this result is left to the reader.

--------------------

Problem: (Stanford Putnam Practice) Prove that consecutive Fibonacci numbers are always relatively prime.

Solution: Well, since we had a handy introduction to Euclid's Algorithm up there, let's try to use it. We want to calculate $ \gcd(F_{n+1}, F_n) $. Executing the algorithm, we get

$ F_{n+1} = F_n \cdot 1+F_{n-1} $

$ F_n = F_{n-1} \cdot 1+F_{n-2} $

$ F_{n-1} = F_{n-2} \cdot 1+F_{n-3} $.

Hmm, is there a pattern? I think so! We eventually get

$ F_3 = F_2 \cdot 1+F_1 $

$ F_2 = F_1 \cdot 1+0 $.

So $ \gcd(F_{n+1}, F_n) = F_1 = 1 $, as desired. QED.

--------------------

Comment: Euclid's Algorithm is an extremely powerful tool for computing greatest common divisors. It even comes in handy in computer science, where we can write a very simple recursive method to compute $ \gcd(a,b) $. For example

#include <algorithm>  // for std::swap
using std::swap;

// Recursive Euclidean algorithm: gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a.
int gcd(int a, int b)
{
    if(b > a) swap(a,b);   // make sure a >= b (not strictly necessary)
    if(b == 0) return a;   // base case: gcd(a, 0) = a
    a %= b;                // replace a by a mod b
    return gcd(b, a);
}

Tuesday, December 12, 2006

Complexity. Topic: Complex Numbers/Polynomials. Level: AIME/Olympiad.

Problem: Let $ f(x) = x^{2004}+2x^{2003}+\cdots+2004x+2005 $. If $ z = \cos{\frac{\pi}{1003}}+i \sin{\frac{\pi}{1003}} $ and

$ N = f(z)f(z^2) \cdots f(z^{2005}) $,

find the remainder when $ N $ is divided by $ 1000 $. [Reworded]

Solution: Well $ f $ is pretty ugly looking at the moment, let's fix that. We can write it as the sum of the following series:

$ 1+x+x^2+\cdots+x^{2004} = \frac{x^{2005}-1}{x-1} $

$ 1+x+x^2+\cdots+x^{2003} = \frac{x^{2004}-1}{x-1} $

$ \cdots $

$ 1+x = \frac{x^2-1}{x-1} $

$ 1 = \frac{x-1}{x-1} $,

since all of them are geometric series with common ratio $ x $. Summing these up, we have

$ f(x) = \frac{x^{2005}+x^{2004}+\cdots+x+1-2006}{x-1} $.

The top is again a geometric series with common ratio $ x $ (without that $ 2006 $ part) so we can write it as

$ f(x) = \frac{\frac{x^{2006}-1}{x-1}-2006}{x-1} $.

But wait, if $ x = z^k $ then $ x^{2006} = (z^k)^{2006} = (z^{2006})^k = 1 $, so in fact we have

$ f(z^k) = \frac{2006}{1-z^k} $.

Then $ N = \frac{2006^{2005}}{(1-z)(1-z^2) \cdots (1-z^{2005})} $ so it remains to evaluate the denominator of that. Considering

$ g(x) = (x-z)(x-z^2) \cdots (x-z^{2005}) $

we are like whoa, $ g $ has the $ 2006 $th roots of unity for its roots except $ x = 1 $ and it has leading coefficient $ 1 $ so it must be

$ g(x) = x^{2005}+x^{2004}+\cdots+1 \Rightarrow g(1) = (1-z)(1-z^2) \cdots (1-z^{2005}) = 2006 $.

Hence

$ N = \frac{2006^{2005}}{g(1)} = 2006^{2004} $.

Since $ 6 $ is not coprime to $ 1000 $, split the modulus: both $ 6^{2004} $ and $ 6^4 $ are divisible by $ 8 $, and modulo $ 125 $ Euler's Totient Theorem ($ \phi(125) = 100 $, which divides $ 2000 $) gives $ 6^{2004} \equiv 6^4 $. Hence $ N \equiv 2006^{2004} \equiv 6^4 \equiv 296 \pmod{1000} $. QED.
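
Both key steps are easy to sanity-check numerically: the product $ (1-z)(1-z^2)\cdots(1-z^{2005}) $ should come out to $ 2006 $, and $ 2006^{2004} \bmod 1000 $ can be computed directly by repeated squaring. A hypothetical sketch:

#include <cstdio>
#include <complex>
#include <cmath>
using namespace std;

long long modpow(long long b, long long e, long long m) {
    long long r = 1 % m;
    for (b %= m; e > 0; e >>= 1, b = b * b % m)
        if (e & 1) r = r * b % m;
    return r;
}

int main() {
    const double PI = acos(-1.0);
    complex<double> z = polar(1.0, PI / 1003);        // z = cos(pi/1003) + i sin(pi/1003)
    complex<double> zk = 1.0, prod = 1.0;
    for (int k = 1; k <= 2005; k++) { zk *= z; prod *= (1.0 - zk); }
    printf("g(1) = (1-z)(1-z^2)...(1-z^2005) ~ %.6f + %.6fi\n", prod.real(), prod.imag());
    // expect approximately 2006 + 0i
    printf("2006^2004 mod 1000 = %lld\n", modpow(2006, 2004, 1000));   // expect 296
}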

--------------------

Comment: It didn't take too much "insight" to reduce $ f $ to the nicer expression, more like just tedious evaluation of geometric series galore. Seeing the product in the denominator should have been pretty familiar (though I admit I forgot what it came out to at first) and a standard argument allowed you to evaluate it. Possibly the hardest part was Euler's Totient Theorem, which is invaluable to carry around for the AIME. Also knowing that $ \phi(1000) = 400 $.

--------------------

Practice Problem: Do there exist polynomials $ P(x) $ and $ Q(x) $ such that

$ (x^8-1)P(x)+(x^5-1)Q(x) = x-1 $?

If so, find them.

Sunday, December 10, 2006

Just... Right. Topic: Calculus/Inequalities. Level: AIME/Olympiad.

Problem: (Stanford Putnam Practice) Let $ f $ be continuous and monotonically increasing, with $ f(0) = 0 $ and $ f(1) = 1 $. Prove that

$ \displaystyle f(1/10)+f(2/10)+\cdots+f(9/10)+f^{-1}(1/10)+f^{-1}(2/10)+\cdots+f^{-1}(9/10) \le \frac{99}{10} $.

Solution: We can easily generalize this to

$ \displaystyle \sum_{k=1}^{n-1} [f(k/n)+f^{-1}(k/n)] \le \frac{n^2-1}{n} $,
so we will prove this instead. Consider the value of $ \displaystyle \int_0^1 [f(x)+f^{-1}(x)]dx $. The region under $ f $ and the region under $ f^{-1} $ (which is the reflection of the region to the left of $ f $) together tile the unit square with opposite vertices $ (0,0) $ and $ (1,1) $, so the integral is $ 1 $. Visualize this with a picture. Now we use a left-hand Riemann sum approximation on both $ f(x) $ and $ f^{-1}(x) $ with $ n-1 $ subdivisions from $ \frac{1}{n} $ to $ 1 $ to get

$ \displaystyle \int_0^{\frac{1}{n}} [f(x)+f^{-1}(x)]dx+\frac{1}{n}\sum_{k=1}^{n-1} [f(k/n)+f^{-1}(k/n)] \le \int_0^1 [f(x)+f^{-1}(x)]dx = 1 $.

Then it remains to show that

$ \displaystyle \int_0^{\frac{1}{n}} [f(x)+f^{-1}(x)]dx \ge \frac{1}{n^2} $.

But by the same picture as before, the region in question covers at least the square with opposite vertices $ (0,0) $ and $ \left(\frac{1}{n},\frac{1}{n}\right) $, which has area $ \frac{1}{n^2} $. Again, draw a picture (it's the same as before except without the endpoint restriction). Hence we have shown that

$ \displaystyle \sum_{k=1}^{n-1} [f(k/n)+f^{-1}(k/n)] \le \frac{n^2-1}{n} $,

as desired. QED.
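
A quick numeric illustration with a sample function (just a sanity check of the original inequality, not part of the proof): take $ f(x) = x^2 $, so $ f^{-1}(x) = \sqrt{x} $, and compare the sum with $ \frac{99}{10} $. Hypothetical sketch:

#include <cstdio>
#include <cmath>

int main() {
    // sample f satisfying the hypotheses: continuous, increasing, f(0)=0, f(1)=1
    auto f    = [](double x) { return x * x; };
    auto finv = [](double x) { return sqrt(x); };
    double sum = 0;
    for (int k = 1; k <= 9; k++)
        sum += f(k / 10.0) + finv(k / 10.0);
    printf("sum = %.4f, bound 99/10 = %.4f\n", sum, 9.9);   // sum should be <= 9.9
}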

--------------------

Comment: This was a pretty interesting problem; it was pretty clear from the beginning that you were approximating some kind of interval and it looked a lot like Riemann sums so that seemed to be the method of choice. After that, a little knowledge about function inverses and a couple of nice pictures gave the rest of the solution.

--------------------

Practice Problem: (Stanford Putnam Practice) Let $ a_0 $ be an arbitrary positive integer and let

$ a_{n+1} = \frac{1}{a_n}+1 $ for $ n \ge 0 $.

Determine if $ \lim_{n \rightarrow \infty} a_n $ exists and if so, evaluate it. [Reworded]

Saturday, December 9, 2006

eek! Topic: Calculus/S&S. Level: AIME/Olympiad.

Problem: (Problem-Solving Through Problems - 5.4.15) Let $ f_0(x) = e^x $ and $ f_{n+1}(x) = xf_n^{\prime}(x) $ for $ n = 0, 1, 2, \ldots $. Show that

$ \displaystyle \sum_{n=0}^{\infty} \frac{f_n(1)}{n!} = e^e $.

Solution: Well, the LHS looks suspiciously like a Taylor series, so maybe we can find a function. A good choice is $ g(x) = e^{e^x} $. Notice that $ f_0(x) = g(\ln{x}) $. Testing a few derivatives, we hypothesize that

$ f_n(x) = g^{(n)}(\ln{x}) $.

By induction, we easily have

$ \frac{d}{dx}[g^{(n)}(\ln{x})] = g^{(n+1)}(\ln{x}) \cdot \frac{1}{x} $,

which exactly satisfies

$ g^{(n+1)}(\ln{x}) = x \frac{d}{dx}[g^{(n)}(\ln{x})] $

so we indeed have $ f_n(x) = g^{(n)}(\ln{x}) $. Then $ f_n(1) = g^{(n)}(0) $ and the Taylor series of $ g $ centered at zero is

$ \displaystyle \sum_{n=0}^{\infty} \frac{g^{(n)}(0)x^n}{n!} = \sum_{n=0}^{\infty} \frac{f_n(1)x^n}{n!} $.

Hence our desired sum is the above series evaluated at $ x = 1 $, which is simply

$ g(1) = e^{e^{1}} = e^e $.

QED.
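
Numerically, one can generate the $ f_n $ by writing $ f_n(x) = p_n(x)e^x $ for a polynomial $ p_n $, since $ f_{n+1} = xf_n^{\prime} $ gives $ p_{n+1}(x) = x\left(p_n(x)+p_n^{\prime}(x)\right) $, and then watch the partial sums of $ \sum f_n(1)/n! $ approach $ e^e \approx 15.1543 $. A hypothetical sketch (the recurrence for $ p_n $ is my own bookkeeping, not part of the original problem):

#include <cstdio>
#include <vector>
#include <cmath>
using namespace std;

int main() {
    const int N = 20;
    vector<double> p = {1.0};                  // p_0(x) = 1, so f_0(x) = e^x
    double sum = 0, factorial = 1, e = exp(1.0);
    for (int n = 0; n <= N; n++) {
        if (n > 0) factorial *= n;
        double pAt1 = 0;
        for (double c : p) pAt1 += c;          // p_n(1) = sum of coefficients
        sum += e * pAt1 / factorial;           // f_n(1) = e * p_n(1)
        // p_{n+1}(x) = x * (p_n(x) + p_n'(x))
        vector<double> q(p.size() + 1, 0.0);
        for (int k = 0; k < (int)p.size(); k++) {
            q[k + 1] += p[k];                  // x * p_n
            if (k >= 1) q[k] += k * p[k];      // x * p_n'
        }
        p = q;
    }
    printf("partial sum = %.6f, e^e = %.6f\n", sum, exp(e));
}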

--------------------

Comment: A neat problem resulting from Taylor series. They are useful in all sorts of ways, especially evaluating other series. If we can reduce a given series to the Taylor series of a function like we did in this problem, evaluating it becomes plugging in a point.

--------------------

Practice Problem: (Problem-Solving Through Problems - 5.4.17) Show that the functional equation

$ \displaystyle f\left(\frac{2x}{1+x^2}\right) = (1+x^2)f(x) $

is satisfied by

$ f(x) = 1+\frac{1}{3}x^2+\frac{1}{5}x^4+\frac{1}{7}x^6+\cdots $ with $ |x| < 1 $.

Wednesday, December 6, 2006

Digit To The Unit. Topic: Algebra. Level: AIME.

Problem: (2006-2007 Warm-Up 5 - #13) Determine the units digit in the decimal expansion of $ (20+\sqrt{397})^{2674} $.

Solution: Well, in its current form it is quite an ugly expression, with half of the terms involving a radical. Maybe we can simplify this, a.k.a. get rid of the radicals. Consider

$ (20+\sqrt{397})^{2674}+(20-\sqrt{397})^{2674} $.

Since $ 0 < 20-\sqrt{397} < 1 $, the term $ (20-\sqrt{397})^{2674} $ is strictly between $ 0 $ and $ 1 $ (it's really small). The above expression is an integer because all of the radical terms cancel out. So if we find the units digit of this integer, we simply subtract one away and get the units digit of our original number. But this number is just

$ 2(20^{2674}+\cdots+397^{1337}) $,

where every term in between is divisible by $ 20 $. That means they all have a units digit of zero, so we only need to look at the last term. Since powers of $ 7 $ repeat the sequence

$ 7, 9, 3, 1 \pmod{10} $,

we know $ 397^{1337} \equiv 7^{1337} \equiv 7 \pmod{10} $. So twice of this would give a units digit of $ 4 $. Subtracting one away as we mentioned above gives us a units digit of $ 3 $. QED.
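
Another route to the same digit (a sanity check, not the intended solution): $ 20 \pm \sqrt{397} $ are the roots of $ t^2-40t+3 = 0 $, so $ s_n = (20+\sqrt{397})^n+(20-\sqrt{397})^n $ satisfies $ s_n = 40s_{n-1}-3s_{n-2} $ with $ s_0 = 2 $, $ s_1 = 40 $, and we can run the recurrence modulo $ 10 $. Hypothetical sketch:

#include <cstdio>

int main() {
    // s_n = (20+sqrt(397))^n + (20-sqrt(397))^n  (mod 10), via s_n = 40 s_{n-1} - 3 s_{n-2}
    int prev = 2, cur = 40 % 10;                         // s_0, s_1 (mod 10)
    for (int n = 2; n <= 2674; n++) {
        int next = ((40 * cur - 3 * prev) % 10 + 10) % 10;
        prev = cur;
        cur = next;
    }
    // (20+sqrt(397))^2674 = s_2674 - (20-sqrt(397))^2674, and the subtracted term is in (0,1),
    // so the units digit we want is s_2674 - 1 (mod 10)
    printf("s_2674 mod 10 = %d, units digit = %d\n", cur, (cur + 9) % 10);   // expect 4 and 3
}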

--------------------

Comment: This is an excellent application of the binomial theorem and a good test of your intuition, which basically consisted of noticing $ 20^2 \approx 397 $. The power $ 2674 $ was an arbitrary even number; in fact, for smaller powers we notice the exact same thing:

$ (20+\sqrt{397})^2 \approx 1593.99 $.

--------------------

Practice Problem: Determine how many elements of the $ n $th row of Pascal's Triangle are odd.

Tuesday, December 5, 2006

2006 Math Is Cool Tests.

Practice Tests:

2006 Math Is Cool

NOTE: Thanks a lot to Jacob for scanning the tests and getting them to me (a long time ago)! Also, if you want to look at them, try to download them to your own computer; having to load them every time is no fun.

Monday, December 4, 2006

Yay! Putnam! Part 2. Topic: Sets. Level: AIME/Olympiad.

Problem: (2006 Putnam - B2) Prove that, for every set $ X = \{x_1, x_2, \ldots, x_n\} $ of $ n $ real numbers, there exists a non-empty subset $ S $ of $ X $ and an integer $ m $ such that

$ \displaystyle \left|m+\sum_{s \in S} s\right| \le \frac{1}{n+1} $.

Solution: Essentially, it's telling us to find a subset such that the sum of the elements has fractional part $ \le \frac{1}{n+1} $ or $ \ge \frac{n}{n+1} $. Consider the partial sums of $ X $, i.e.

$ t_1 = x_1 $

$ t_2 = x_1+x_2 $

$ \cdots $

$ t_n = x_1+x_2+\cdots+x_n $.

Now partition the interval $ [0,1) $ into $ n+1 $ intervals of length $ \frac{1}{n+1} $, like so:

$ \left[0, \frac{1}{n+1}\right]; \left(\frac{1}{n+1}, \frac{2}{n+1}\right); \left[\frac{2}{n+1}, \frac{3}{n+1}\right); \cdots ;\left[\frac{n-1}{n+1}, \frac{n}{n+1}\right); \left[\frac{n}{n+1}, 1\right) $.

Let $ \{t_i\} $ denote the fractional part of $ t_i $. Clearly if $ \{t_i\} \in \left[0, \frac{1}{n+1}\right] $ or $ \{t_i\} \in \left[\frac{n}{n+1}, 1\right) $, the problem is solved because we simply choose $ t_i $ and the closest integer to it. If, on the other hand, none of the $ \{t_i\} $ fall into these sets, they must all be in the intervals

$ \left(\frac{1}{n+1}, \frac{2}{n+1}\right); \left[\frac{2}{n+1}, \frac{3}{n+1}\right); \cdots ;\left[\frac{n-1}{n+1}, \frac{n}{n+1}\right) $.

Since there are exactly $ n-1 $ of these and we have $ n $ $ \{t_i\} $'s, by the Pigeonhole Principle we know two such $ \{t_i\} $ fall into the same interval, say $ \{t_j\} $ and $ \{t_k\} $ with $ j < k $. Then we choose the set $ S = \{x_{j+1}, x_{j+2}, \ldots, x_k\} $. The sum of the elements is

$ \displaystyle x_{j+1}+x_{j+2}+\cdots+x_k = t_k-t_j $,

which satisfies

$ |\{t_k\}-\{t_j\}| \le \frac{1}{n+1} $.

This, however, means that

$ \{t_k-t_j\} \le \frac{1}{n+1} $ or $ \{t_k-t_j\} \ge \frac{n}{n+1} $,

meaning that the set $ S = \{x_{j+1}, x_{j+2}, \ldots, x_k\} $ satisfies the desired property by choosing the appropriate $ m $, which is the closest integer to the sum. QED.

--------------------

Comment: This was a pretty standard pre-olympiad level pigeonhole problem, though it had some interesting notation which I suspect threw some people off. Plus the fact that the property we are to prove is quite weak in that we only have to look at sets of consecutive elements. In any case, it was fun though a bit on the simple side.

--------------------

Practice Problem: (2006 Putnam - B5) For each continuous function $ f: [0,1] \rightarrow \mathbb{R} $, let $ I(f) = \int_0^1 x^2f(x)dx $ and $ J(f) = \int_0^1 x[f(x)]^2dx $. Find the maximum value of $ I(f)-J(f) $ over all such functions $ f $.

Sunday, December 3, 2006

Yay! Putnam! Topic: Algebra/Polynomials. Level: AIME.

Problem: (2006 Putnam - B1) Show that the curve $x^{3}+3xy+y^{3}=1$ contains only one set of three distinct points, $A$, $B$, and $C$, which are the vertices of an equilateral triangle, and find its area.

Solution: Well, this is a cubic in two variables, but let's remember the awesome technique of writing it as a polynomial in just one variable, say $ x $. Then

$ x^3+3xy+(y^3-1) = 0 $.

Well, testing out a bit (looking at the factorization of $ y^3-1 $), we get $ x = 1-y $ is always a solution, so factor it out to get

$ (x+y-1)[x^2+x(1-y)+(y^2+y+1)] = 0 $.

Looking at the second term, we're like let's hope it has not many solutions, so we take the discriminant and find

$ (1-y)^2-4(y^2+y+1) = -3y^2-6y-3 = -3(y+1)^2 $.

But this is always negative unless $ y = -1 $ so that means this is the only case in which this factor can be zero. This gives us the point $ (-1,-1) $ as a solution. Whoa, that means we have categorized the entire solution set:

(1) the line $ x+y = 1 $ and (2) the point $ (-1,-1) $.

If we have three vertices of an equilateral triangle, they most definitely can't be collinear, so one point must be $ (-1,-1) $. Suppose we choose two points on the line $ x+y = 1 $, say $ (x_0, 1-x_0) $ and $ (x_1, 1-x_1) $. Since they have to be equidistant from $ (-1, -1) $, we know one must be the reflection of the other over the line through $ (-1, -1) $ perpendicular to $ x+y = 1 $, which is $ x = y $. So if the first point is $ (x_0, 1-x_0) $ then the other is $ (1-x_0, x_0) $.

Now setting the squares of the side lengths equal to each other, we know

$ 2(2x_0-1)^2 = (x_0+1)^2+(2-x_0)^2 $

$ 2x_0^2-2x_0-1 = 0 $ so $ x_0 = \frac{1}{2} \pm \frac{\sqrt{3}}{2} $.

This gives the $ x $-coordinates of the other two vertices of the triangle, so the only such triangle is the equilateral triangle with vertices

$ (-1, -1); \left(\frac{1}{2}+\frac{\sqrt{3}}{2}, \frac{1}{2}-\frac{\sqrt{3}}{2}\right); \left(\frac{1}{2}-\frac{\sqrt{3}}{2}, \frac{1}{2}+\frac{\sqrt{3}}{2}\right) $.

The side length is $ \sqrt{6} $, so the area is $ \frac{6\sqrt{3}}{4} = \frac{3\sqrt{3}}{2} $. QED.
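
A quick numeric double check of the final answer (just plugging the three claimed vertices back into the curve and recomputing the side lengths and area); hypothetical sketch:

#include <cstdio>
#include <cmath>

// the curve x^3 + 3xy + y^3 - 1
double curve(double x, double y) { return x*x*x + 3*x*y + y*y*y - 1; }

double dist(double x1, double y1, double x2, double y2) { return hypot(x2 - x1, y2 - y1); }

int main() {
    double s = sqrt(3.0) / 2;
    double px[3] = {-1, 0.5 + s, 0.5 - s};
    double py[3] = {-1, 0.5 - s, 0.5 + s};
    for (int i = 0; i < 3; i++)
        printf("on curve? %.2e\n", curve(px[i], py[i]));        // should all be ~0
    double a = dist(px[0], py[0], px[1], py[1]);
    double b = dist(px[1], py[1], px[2], py[2]);
    double c = dist(px[2], py[2], px[0], py[0]);
    printf("sides: %.6f %.6f %.6f (sqrt 6 = %.6f)\n", a, b, c, sqrt(6.0));
    printf("area = %.6f (3 sqrt3 / 2 = %.6f)\n", sqrt(3.0) / 4 * a * a, 3 * sqrt(3.0) / 2);
}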

--------------------

Comment: Slightly difficult if your algebraic intuition wasn't working well, but after you realized the factorization it wasn't hard to convince yourself that a line and a point can have the vertices of at most one equilateral triangle. A solid B1 problem on the Putnam, quite a bit more difficult than the A1, imo.

--------------------

Practice Problem: (2006 Putnam - A1) Find the volume of the region of points $ (x,y,z) $ such that

$ (x^2+y^2+z^2+8)^2 \le 36(x^2+y^2) $.

Saturday, December 2, 2006

Seriously? No, Seriesly. Topic: Calculus/S&S. Level: AIME/Olympiad.

Problem: (Problem-Solving Through Problems - 6.9.11) Sum the series

$ 1+\frac{1}{3}-\frac{1}{5}-\frac{1}{7}+\frac{1}{9}+\frac{1}{11}-\frac{1}{13}-\cdots $.

Solution: Consider the function

$ f(x) = x+\frac{x^3}{3}-\frac{x^5}{5}-\frac{x^7}{7}+\frac{x^9}{9}+\frac{x^{11}}{11}-\cdots $,

where $ f(1) $ is the series in question. We will derive a closed form for $ f $ whenever $ |x| \le 1 $, since this guarantees convergence (alternating series test works here). Taking the derivative, we get

$ f^{\prime}(x) = (1+x^2)-(x^4+x^6)+(x^8+x^{10})-\cdots $,

which is clearly a geometric series with first term $ 1+x^2 $ and common ratio $ -x^4 $. Hence we can write $ f^{\prime} $ as

$ f^{\prime}(x) = \frac{1+x^2}{1+x^4} $.

Now we want to integrate to solve for $ f $, but this requires some partial fraction decomposition (ew!). I'll leave out the details, just notice that

$ \frac{1+x^2}{1+x^4} = \frac{1}{1+(x\sqrt{2}+1)^2}+\frac{1}{1+(x\sqrt{2}-1)^2} $,

so integrating we get

$ \displaystyle f(x) = \int_0^x \left(\frac{1}{1+(t\sqrt{2}+1)^2}+\frac{1}{1+(t\sqrt{2}-1)^2}\right)dt = \frac{1}{\sqrt{2}}\left(\arctan{(x\sqrt{2}+1)}+\arctan{(x\sqrt{2}-1)}\right) $.

Now it remains to plug in $ x = 1 $ to get

$ f(1) = \frac{1}{\sqrt{2}}\left(\arctan{(\sqrt{2}+1)}+\arctan{(\sqrt{2}-1)}\right) $.

Oh, no! What now? Well, we see that $ \frac{1}{\sqrt{2}+1} = \sqrt{2}-1 $, i.e. the arguments are reciprocals. But we know that $ \arctan{(z)} = \frac{\pi}{2}-\arctan{\left(\frac{1}{z}\right)} $ (think about a right triangle and the tangent from each of the angles). So

$ \frac{1}{\sqrt{2}}\left(\arctan{(\sqrt{2}+1)}+\arctan{(\sqrt{2}-1)}\right) = \frac{\pi}{2\sqrt{2}} \approx 1.11072 $.

QED.
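
Summing the series directly gives a slow but convincing numerical check against $ \frac{\pi}{2\sqrt{2}} \approx 1.11072 $. Hypothetical sketch:

#include <cstdio>
#include <cmath>

int main() {
    double sum = 0;
    long long terms = 10000000;                // more terms -> closer to the limit
    for (long long j = 0; j < terms; j++) {
        double term = 1.0 / (2 * j + 1);       // denominators are the odd numbers
        sum += (j % 4 < 2) ? term : -term;     // sign pattern + + - - repeating
    }
    const double PI = acos(-1.0);
    printf("partial sum = %.6f, pi/(2 sqrt 2) = %.6f\n", sum, PI / (2 * sqrt(2.0)));
}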

--------------------

Comment: This technique is really neat for evaluating weird harmonic, alternating infinite series. Of course it doesn't always work, but I suspect that there are many problems that you can convert into something of this type and solve by differentiating and integrating.

--------------------

Practice Problem: (Problem-Solving Through Problems - 6.9.6) Suppose that $ f:[0,1] \rightarrow \mathbb{R} $ has a continuous second derivative, that $ f(0) = 0 = f(1) $, and that $ f(x) > 0 $ for all $ x $ in $ (0,1) $. Show that

$ \displaystyle \int_0^1 \left|\frac{f^{\prime\prime}(x)}{f(x)}\right|dx > 4 $.

Friday, December 1, 2006

Integra-what? Topic: Calculus/S&S.

Problem: Write $ I = \displaystyle \sum_{l_1=1}^{\infty} \sum_{l_2=1}^{\infty} \frac{(-1)^{l_1-1}}{l_1l_2(l_1+l_2)} $ as an integral.

Solution: We first notice that

$ \displaystyle \frac{(-1)^{l_1-1}}{l_1l_2(l_1+l_2)} = \frac{1}{l_1l_2} \int_0^1 (-x)^{l_1-1}x^{l_2} dx $.

So our summation becomes

$ \displaystyle I = \sum_{l_1=1}^{\infty} \sum_{l_2=1}^{\infty} \frac{1}{l_1l_2} \int_0^1 (-x)^{l_1-1}x^{l_2} dx $.

But we can switch the order of the integral with the two sums: since $ \displaystyle \sum \int_0^1 \left|\frac{(-x)^{l_1-1}x^{l_2}}{l_1l_2}\right| dx = \sum \frac{1}{l_1l_2(l_1+l_2)} $ is finite, the interchange is justified. Hence we obtain

$ \displaystyle I = \int_0^1 \sum_{l_1=1}^{\infty} \left(\frac{(-x)^{l_1-1}}{l_1}\sum_{l_2=1}^{\infty} \frac{x^{l_2}}{l_2} \right) dx $.

We remember, however, that the Taylor series expansion of $ \displaystyle \ln{(1-x)} = -\sum_{n=1}^{\infty} \frac{x^n}{n} $, so substituting we have

$ \displaystyle I = -\int_0^1 \ln{(1-x)}\sum_{l_1=1}^{\infty} \frac{(-x)^{l_1-1}}{l_1} dx $.

Similarly, $ \displaystyle \ln{(1+x)} = -\sum_{n=1}^{\infty} \frac{(-x)^n}{n} $, so we replace one more time to get

$ \displaystyle I = -\int_0^1 \frac{1}{x} \ln{(1-x)} \ln{(1+x)} dx $.

QED.
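
A rough numerical check that the truncated double sum and the integral really agree (both come out around $ 0.751 $); hypothetical sketch using a simple midpoint rule:

#include <cstdio>
#include <cmath>

int main() {
    // truncated double sum
    double s = 0;
    const int L = 4000;
    for (int l1 = 1; l1 <= L; l1++)
        for (int l2 = 1; l2 <= L; l2++)
            s += ((l1 % 2) ? 1.0 : -1.0) / ((double)l1 * l2 * (l1 + l2));
    // midpoint rule for -int_0^1 ln(1-x) ln(1+x) / x dx
    double I = 0;
    const int N = 1000000;
    for (int i = 0; i < N; i++) {
        double x = (i + 0.5) / N;
        I += -log(1 - x) * log(1 + x) / x;
    }
    I /= N;
    printf("double sum ~ %.3f, integral ~ %.3f\n", s, I);   // both approximately 0.751
}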

--------------------

Comment: Transforming sums into integrals and vice versa are powerful techniques for evaluating these expressions. The numerical value turns out to be $ I = \frac{5}{8} \zeta(3) $, but we'll save that for another time...

--------------------

Practice Problem: (1978 Putnam - B2) Evaluate

$ \displaystyle \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \frac{1}{m^2n+n^2m+2mn} $.

Thursday, November 30, 2006

Exponents Away. Topic: Algebra/Calculus. Level: AIME/Olympiad.

Problem: (Problem-Solving Through Problems - 6.5.12) Suppose $ n $ is a nonnegative integer and

$ f_n(x) = c_0e^{r_0x}+c_1e^{r_1x}+\cdots+c_ne^{r_nx} $,

where $ c_i $ and $ r_i $ are real numbers. Prove that if $ f_n $ has more than $ n $ zeros in $ \mathbb{R} $, then $ f_n(x) \equiv 0 $.

Solution: First, we add the claim that $ f_n(x) = 1 $ can occur for at most $ n+1 $ distinct $ x $ (unless $ f_n $ is identically $ 1 $, a degenerate case that is easy to dispose of separately).

We proceed by induction. For $ n = 0 $, we have $ f_0(x) = c_0e^{r_0x} $, which is never zero unless $ c_0 = 0 $, so if it has a root it must be identically $ 0 $. It also takes the value $ 1 $ at most once, because it is either constant or strictly monotonic. Now suppose both hypotheses are true for some nonnegative integer $ k $. We want to show that

$ f_{k+1}(x) = c_0e^{r_0x}+c_1e^{r_1x}+\cdots+c_ke^{r_kx}+c_{k+1}e^{r_{k+1}x} $

has at most $ k+1 $ zeros (except when $ c_i = 0 $ for all $ i $) and takes the value $ 1 $ at most $ k+2 $ times. If $ c_{k+1} = 0 $, we apply the inductive hypothesis and it is trivially true, so assume $ c_{k+1} \neq 0 $. From $ f_{k+1}(x) = 0 $, we get

$ c_0e^{r_0x}+c_1e^{r_1x}+\cdots+c_ke^{r_kx} = -c_{k+1}e^{r_{k+1}x} $.

Letting $ b_i = -\frac{c_i}{c_{k+1}} $ and $ q_i = r_i-r_{k+1} $ for $ i = 0, 1, \ldots, k $, we obtain

$ b_0e^{q_0x}+b_1e^{q_1x}+\cdots+b_ke^{q_kx} = 1 $.

By our inductive hypothesis, we know this occurs at most $ k+1 $ times, proving that $ f_{k+1} $ has at most $ k+1 $ zeros or is identically $ 0 $.

Now we want to show that $ f_{k+1}(x) = 1 $ at most $ k+2 $ times. Suppose $ f_{k+1}(x) = 1 $ for $ x_1 < x_2 < \ldots < x_m $ for $ m > k+2 $. Then, by Rolle's Theorem, we have

$ f_{k+1}^{\prime}(x) = c_0r_0e^{r_0x}+c_1r_1e^{r_1x}+\cdots+c_kr_ke^{r_kx}+c_{k+1}r_{k+1}e^{r_{k+1}x} = 0 $

between each consecutive pair of $ x_i $'s, giving us $ m-1 > k+1 $ such zeros. But by what we just proved, $ f_{k+1}^{\prime} $, which is in the same family of functions (and is not identically zero, since $ f_{k+1} $ is not constant), has at most $ k+1 $ zeros. Hence $ f_{k+1}(x) = 1 $ for at most $ k+2 $ distinct $ x $, completing the induction.

Therefore, we conclude that $ f_n(x) $ has at most $ n $ zeros or is identically $ 0 $.

--------------------

Comment: A pretty tough problem, took a couple of looks at smaller cases to determine the method with which to solve it. The key is looking for the inductive step, somehow reducing the $ k+1 $ case to the $ k $ case. Adding in the extra condition might not have been necessary but it made it easier to move from case to case. Here's a "flow chart" of what proves what.

$ f_k(x) = 0 $ for at most $ k $ values or is identically $ 0 $ --> $ f_k(x) = 1 $ for at most $ k+1 $ values --> $ f_{k+1}(x) = 0 $ for at most $ k+1 $ values or is identically $ 0 $ --> $ f_{k+1}(x) = 1 $ for at most $ k+2 $ values.

--------------------

Practice Problem: Show that $ x^3-3x+b $ cannot have more than one zero in $ [-1,1] $, regardless of the value of $ b $.

Wednesday, November 29, 2006

Balance Those p's. Topic: Algebra/Polynomials/S&S. Level: Olympiad.

Problem: (Stanford Putnam Practice) A finite sequence $ a_1, a_2, \ldots, a_n $ is called $ p $-balanced if any sum of the form $ a_k+a_{k+p}+a_{k+2p}+\cdots $ is the same for $ k = 1, 2, \ldots, p $. Prove that if a sequence with $ 50 $ members is $ p $-balanced for $ p = 3, 5, 7, 11, 13, 17 $, then all its members are equal to zero.

Solution: Let $ P(x) = a_{50}x^{49}+a_{49}x^{48}+\cdots+a_2x+a_1 $. Recall the strategy of isolating certain terms in a polynomial. Letting $ \omega_p $ denote a primitive $ p $th root of unity, we have

$ \displaystyle a_1+a_{1+p}+a_{1+2p}+\cdots = \frac{1}{p} \sum_{i=0}^{p-1} P(\omega_p^i) $

because all of the terms will cancel out by the identity $ 1+\omega_p+\omega_p^2+\cdots+\omega_p^{p-1} = 0 $ except for the terms in which the power of the exponent is divisible by $ p $, in which case $ \omega_p^p = 1 $. Similarly, using the polynomials $ xP(x), x^2P(x), \ldots, x^{p-1}P(x) $ we obtain

$ \displaystyle a_k+a_{k+p}+a_{k+2p}+\cdots = \frac{1}{p} \sum_{i=0}^{p-1} \omega_p^{i(p+1-k)}P(\omega_p^i) $

from $ x^{p+1-k}P(x) $ and $ k = 2, 3, \ldots, p $. Hence we have a system of $ p $ equations in the variables

$ P(1), P(\omega_p), \ldots, P(\omega_p^{p-1}) $.

But if we have a system of $ p $ equations for each of $ p = 3, 5, 7, 11, 13, 17 $, we have a total of $ 3+5+7+11+13+17 = 56 $ equations. Since the only evaluation point repeated among the different primes is $ x = 1 $, which shows up $ 6 $ times (once for each prime), there are $ 56-5 = 51 $ distinct points at which we now have information about the polynomial. However, since a polynomial of degree at most $ 49 $ is determined by its values at $ 50 $ points, it must be the zero polynomial, which means $ a_1 = a_2 = \cdots = a_{50} = 0 $, as desired. QED.
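
The extraction identity itself is easy to test numerically: for an arbitrary test sequence and a prime $ p $, the filtered value $ \frac1p\sum_i \omega_p^{i(p+1-k)}P(\omega_p^i) $ should match the direct sum $ a_k+a_{k+p}+a_{k+2p}+\cdots $. Hypothetical sketch (the test sequence is an arbitrary choice of mine):

#include <cstdio>
#include <complex>
#include <cmath>
using namespace std;

int main() {
    const int n = 50, p = 7, k = 3;                  // check a_k + a_{k+p} + ... for one k
    double a[n + 1];
    for (int m = 1; m <= n; m++) a[m] = m * m % 11;  // any test sequence works

    // P(x) = a_50 x^49 + ... + a_2 x + a_1
    auto P = [&](complex<double> x) {
        complex<double> val = 0, xm = 1;
        for (int m = 1; m <= n; m++) { val += a[m] * xm; xm *= x; }
        return val;
    };

    const double PI = acos(-1.0);
    complex<double> filtered = 0;
    for (int i = 0; i < p; i++) {
        complex<double> wi    = polar(1.0, 2 * PI * i / p);                   // omega^i
        complex<double> twist = polar(1.0, 2 * PI * i * (p + 1 - k) / p);     // omega^{i(p+1-k)}
        filtered += twist * P(wi);
    }
    filtered /= (double)p;

    double direct = 0;
    for (int m = k; m <= n; m += p) direct += a[m];

    printf("filtered = %.6f + %.6fi, direct = %.6f\n", filtered.real(), filtered.imag(), direct);
}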

--------------------

Comment: This is an example of generating functions showing up when you least expect it, well not exactly, but it was cool anyway. The proof above is not completely rigorous because we would need to show that a polynomial of degree $ 49 $ actually cannot pass through all $ 51 $ of the points, but it's almost there.

--------------------

Practice Problem: Let $ p(n, k) $ denote the number of ways you can partition $ n $ into $ k $ parts. Show that

$ \displaystyle \sum_{k=1}^n p(n, k) = p(2n, n) $.

Also show that the number of ways $ n $ can be partitioned into unequal parts is equal to the number of ways $ n $ can be partitioned into odd parts.

Tuesday, November 28, 2006

Divides Both! Topic: NT/S&S. Level: AIME.

Problem: (Stanford Putnam Practice) Let $ a_0 = 1 $ and $ a_{n+1} = a_n^2+1 $ for $ n \ge 0 $. Find $ \gcd(a_{2004},a_{999}) $.

Solution: Re-label starting with $ a_0 = 0 $, $ a_1 = 1 $ instead. We will prove a much stronger statement: that $ \gcd(a_m, a_k) = a_{\gcd(m,k)} $ for $ m, k \neq 0 $.

First, we will show a lemma that if $ q | a_r $ for positive integers $ r, q $ then $ a_{r+s} \equiv a_s \pmod{q} $ for $ s \ge 0 $. This is trivial by induction, since $ a_r \equiv a_0 \pmod{q} $, $ a_{r+s+1} \equiv a_{r+s}^2+1 \pmod{q} $, and $ a_{s+1} \equiv a_s^2+1 \pmod{q} $.

We will prove the final result by induction as well, on $ \max(m,k) $. When this is $ 1 $, we clearly have $ (a_1, a_1) = 1 = a_{\gcd(1,1)} $.

Assume the property is true for $ \max(m,k) \le p $. Now suppose $ \beta $ divides $ \gcd(a_m,a_{p+1}) $ with $ m < p+1 $. We must have $ a_m \equiv a_{p+1} \equiv 0 \pmod{\beta} $. But then by the lemma above we have $ \beta | a_m \Rightarrow a_{p+1} \equiv a_{p+1-m} \equiv 0 \pmod{\beta} $. So then

$ \gcd(a_m,a_{p+1}) = \gcd(a_m,a_{p+1-m}) $

with $ p+1-m \le p $. But by our inductive hypothesis, we know that

$ \gcd(a_m,a_{p+1-m}) = a_{\gcd(m,p+1-m)} = a_{\gcd(m,p+1)} = \gcd(a_m,a_{p+1}) $, as desired. Shifting the indices in the problem statement, we have

$ \gcd(a_{2005}, a_{1000}) = a_{\gcd(2005,1000)} = a_5 = 677 $. QED.
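
We cannot compute $ a_{1000} $ or $ a_{2005} $ outright (they are astronomically large), but we can at least confirm that $ a_5 = 677 $ divides both by running the recurrence modulo $ 677 $, which checks the lemma direction of the argument (though not that the gcd is exactly $ 677 $). Hypothetical sketch, using the re-labeled indexing $ a_0 = 0 $, $ a_1 = 1 $:

#include <cstdio>

int main() {
    const long long q = 677;          // a_5 in the re-labeled sequence a_0 = 0, a_{n+1} = a_n^2 + 1
    long long a = 0;                  // a_0 mod q
    for (int n = 1; n <= 2005; n++) {
        a = (a * a + 1) % q;
        if (n == 5 || n == 1000 || n == 2005)
            printf("a_%d mod 677 = %lld\n", n, a);   // expect 0 at n = 5, 1000, 2005
    }
}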

--------------------

Comment: Pretty classic as far as GCDs of terms in a sequence go. It follows a similar property that the Fibonacci sequence also has. Instead of using induction on the last step, another option was to invoke Euclid's Algorithm to arrive at $ \gcd(m,p+1) $ but that didn't seem quite as fulfilling.

--------------------

Practice Problem: (Stanford Putnam Practice) Define a sequence $ \{a_i\} $ by $ a_1 = 3 $ and $ a_{i+1} = 3^{a_i} $ for $ i \ge 1 $. Which integers between $ 00 $ and $ 99 $ occur as the last two digits of infinitely many $ a_i $?

Monday, November 27, 2006

To Converge Or Not To Converge. Topic: Real Analysis/S&S.

Problem: (Stanford Math 51H, Cauchy Root Test) Suppose there exists a $ \lambda \in (0,1) $ and an integer $ N \ge 1 $ such that $ |a_n|^{\frac{1}{n}} \le \lambda $ for all $ n \ge N $. Prove that $ \displaystyle \sum_{n=1}^{\infty} a_n $ converges.

Solution: Well let's just discard $ \displaystyle \sum_{n=1}^{N-1} a_n $ because it is finite and obviously converges. It remains to show that $ \displaystyle \sum_{n=N}^{\infty} a_n $ converges.

But then we have $ |a_n|^{\frac{1}{n}} \le \lambda \Rightarrow |a_n| \le \lambda^n $. So

$ \displaystyle \sum_{n=N}^{\infty} |a_n| \le \sum_{n=N}^{\infty} \lambda^n $.

The last summation, however, is a geometric series with common ratio $ 0 < \lambda < 1 $, so it converges. Hence our sum converges absolutely, and in particular converges. QED.

--------------------

Comment: The root test is, in effect, a comparison to a geometric series. The hypothesis is that we can bound $ |a_n| $ by a geometric series for all large $ n $, implying convergence.

--------------------

Practice Problem: (Stanford Math 51H) Discuss the convergence of

$ \displaystyle \sum_{n=1}^{\infty} \frac{\sin{\frac{1}{n}}}{n} $.

Saturday, November 25, 2006

Stirling Silver. Topic: Calculus.

Problem: Evaluate $ \displaystyle \lim_{n \rightarrow \infty} \frac{k^n \cdot n!}{n^n} $ where $ k $ is an arbitrary positive real number.

Solution: Well in its current form the limit does not look very fun, so let's take the natural log of it.

$ \displaystyle \ln{L_k} = \lim_{n \rightarrow \infty} \left(n \ln{k}+\sum_{i=1}^n \ln{i}-n\ln{n}\right) $

where $ L_k $ is the limit in question. Let's bound the summation term on the RHS using integrals. We see that

$ \displaystyle \int_1^n \ln{x} dx \le \sum_{i=1}^n \ln{i} \le \int_1^n \ln{x}dx + \ln{n} $

$ \displaystyle n\ln{n}-n \le \sum_{i=1}^n \ln{i} \le n\ln{n}-n+\ln{n} $

because $ \displaystyle \int \ln{x} dx = x\ln{x}-x $. Plugging this back into the equation, we get

$ \displaystyle \lim_{n \rightarrow \infty} n(\ln{k}-1) \le \ln{L_k} \le \lim_{n \rightarrow \infty} n(\ln{k}-1)+\ln{n} $.

For $ k < e $, the coefficient of the $ n $ term is negative and it dominates the $ \ln{n} $ term, so as $ n \rightarrow \infty $ both the upper and lower bounds approach $ - \infty $, hence $ \ln{L_k} \rightarrow -\infty \Rightarrow L_k \rightarrow 0 $.

For $ k > e $, the coefficient on the $ n $ term is positive, so both the upper and lower bounds approach $ \infty $ so $ \ln{L_k} \rightarrow \infty \Rightarrow L_k \rightarrow \infty $.

Now for the trickier case $ k = e $. Consider a midpoint approximation of the area under the curve $ \ln{x} $ from $ \frac{1}{2} $ to $ n+\frac{1}{2} $. Since $ \ln{x} $ is concave, the midpoint approximation will be larger than the integral (draw a picture to see this). So

$ \displaystyle \int_{\frac{1}{2}}^{n+\frac{1}{2}} \ln{x}dx \le \sum_{i=1}^n \ln{i} $.

But the LHS turns out to be

$ \left(n+\frac{1}{2}\right)\ln{\left(n+\frac{1}{2}\right)}-\left(n+\frac{1}{2}\right)-\frac{1}{2}\ln{\frac{1}{2}}+\frac{1}{2} $

which we can bound below by

$ n\ln{n}+\frac{1}{2}\ln{n}-n+C $

for some arbitrary constant $ C $. Then

$ \displaystyle \sum_{i=1}^n \ln{i} \ge n\ln{n}+\frac{1}{2}\ln{n}-n+C $.

Plugging back into our original expression for $ \ln{L_k} $, we obtain

$ \ln{L_e} \ge \lim_{n\rightarrow \infty} (n+n\ln{n}+\frac{1}{2}\ln{n}-n+C-n\ln{n}) = \lim_{n\rightarrow \infty} (\frac{1}{2}\ln{n}+C) = \infty $

so $ L_e \rightarrow \infty $ as well. Hence if $ k < e $, the limit converges to zero and if $ k \ge e $ the limit diverges to $ \infty $. QED.

--------------------

Comment: This limit is very interesting because it gives us an idea about Stirling's approximation. In fact, from the last inequality, we see that $ \frac{e^n \cdot n!}{n^n} $ grows at least like $ \sqrt{n} $ (the true constant factor turns out to be $ \sqrt{2\pi} $), and we have, up to a constant,

$ n! \sim \left(\frac{n}{e}\right)^n \cdot \sqrt{n} $.

A very good approximation is

$ n! \approx \sqrt{\pi \cdot \left(2n+\frac{1}{3}\right)} \cdot \left(\frac{n}{e}\right)^n $.
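
A numeric look at the $ k = e $ case (working with logarithms, via the standard library's lgamma, to avoid overflow) shows both claims: $ \frac{e^n \cdot n!}{n^n} $ grows like $ \sqrt{n} $, and the ratio to $ \sqrt{n} $ approaches $ \sqrt{2\pi} \approx 2.5066 $. Hypothetical sketch:

#include <cstdio>
#include <cmath>

int main() {
    for (long long n = 10; n <= 1000000; n *= 10) {
        // log of e^n * n! / n^n, using lgamma(n+1) = log(n!)
        double logL = n + lgamma((double)n + 1) - n * log((double)n);
        double ratio = exp(logL - 0.5 * log((double)n));   // (e^n n!/n^n) / sqrt(n)
        printf("n = %7lld   (e^n n!/n^n)/sqrt(n) = %.6f\n", n, ratio);
    }
    printf("sqrt(2 pi) = %.6f\n", sqrt(2 * acos(-1.0)));
}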

Thursday, November 23, 2006

Limitten. Topic: Real Analysis.

Definition: A sequence $ \{a_n\} $ has limit $ L $, where $ L $ is a given real number, if for each $ \epsilon > 0 $ there is an integer $ N \ge 1 $ such that

$ |a_n-L| < \epsilon $ for all integers $ n \ge N $.

--------------------

Problem: (Stanford Math 51H) Prove that a sequence $ \{a_n\} $ cannot have more than one limit.

Solution: This is logically true, as usual, but a rigorous argument is much more fun.

Suppose, for the sake of contradiction, that $ \lim a_n = L_1 $ and $ \lim a_n = L_2 $ with $ L_1 \neq L_2 $. Letting $ \epsilon = \frac{|L_1-L_2|}{2} > 0 $, we know there exist $ N_1, N_2 $ such that

$ |a_n-L_1| < \epsilon $ for $ n \ge N_1 $

$ |a_n-L_2| < \epsilon $ for $ n \ge N_2 $.

So the above two inequalities are true for $ n \ge \max(N_1, N_2) $. Adding the two inequalities together, we get

$ |a_n-L_1|+|a_n-L_2| < 2\epsilon = |L_1-L_2| $.

However, by the triangle inequality we know $ |a|+|b| \ge |a-b| $ (this is the usual $ |a+b| \le |a|+|b| $ with $ b $ replaced by $ -b $), so

$ |a_n-L_1|+|a_n-L_2| \ge |L_2-L_1| = |L_1-L_2| $.

Then

$ |L_1-L_2| < |L_1-L_2| $,

a contradiction. Hence $ \{a_n\} $ can have at most one limit. QED.

--------------------

Comment: The real definition of the limit is almost always completely overlooked in a regular calculus course (i.e. Calc AB and BC). But it's pretty much the foundation of all of calculus so it is really quite nice to know and work with.

--------------------

Practice Problem: (Stanford Math 51H, Sandwich Theorem) If $ \{a_n\}, \{b_n\} $ are given convergent sequences with $ \lim a_n = \lim b_n $, and if $ \{c_n\} $ is any sequence such that $ a_n \le c_n \le b_n \forall n \ge 1 $, prove that $ \{c_n\} $ is convergent and $ \lim c_n = \lim a_n = \lim b_n $.

Wednesday, November 22, 2006

Root Beer. Topic: Real Analysis.

Problem: (Stanford Math 51H) Prove that every positive real number has a positive square root (That is, for any $ b > 0 $, prove that there is a real number $ \alpha $ such that $ \alpha^2 = b $). [Usual properties of the integers are assumed.]

Solution: Consider the set $ S = \{x \in \mathbb{R}: x > 0, x^2 < b\} $. We can show that $ S $ is non-empty by selecting an integer $ n $ large enough that $ \frac{1}{n^2} < b $, so that $ \frac{1}{n} \in S $. Since the integers are unbounded, there exists a positive integer $ k $ with $ k^2 > b $; then for any $ x \in S $ we have $ x^2 < b < k^2 \Rightarrow x < k $, so $ S $ is bounded from above.

By the completeness property of the reals, $ \alpha = \sup S $ exists. We claim that $ \alpha^2 = b $. Suppose the contrary; then either $ \alpha^2 < b $ or $ \alpha^2 > b $.

CASE 1: If $ \alpha^2 < b $, then consider $ \left(\alpha+\frac{1}{n}\right)^2 = \alpha^2+\frac{2\alpha}{n}+\frac{1}{n^2} $. Choose $ n $ large enough such that

$ \alpha^2+\frac{2\alpha}{n}+\frac{1}{n^2} < b $

which is definitely true for any $ n > \frac{2\alpha+1}{b-\alpha^2} $. But then $ \alpha+\frac{1}{n} \in S $ and $ \alpha+\frac{1}{n} > \alpha = \sup S $, contradicting the fact that $ \alpha $ is the supremum.

CASE 2: If $ \alpha^2 > b $, then consider $ \left(\alpha-\frac{1}{n}\right)^2 = \alpha^2-\frac{2\alpha}{n}+\frac{1}{n^2} $. Again, choose $ n $ large enough such that

$ \alpha^2-\frac{2\alpha}{n}+\frac{1}{n^2} > b $

which we know is true for $ n > \frac{2\alpha}{\alpha^2-b} $ (furthermore, we impose the restriction $ \alpha > \frac{1}{n} $ so our resulting real is positive). Then $ x \le \alpha-\frac{1}{n} $ for all $ x \in S $ and $ \alpha-\frac{1}{n} < \alpha = \sup S $, contradicting the fact that $ \alpha $ is the supremum.

Hence we know that $ \alpha^2 = b $, or that the square root of any positive real number exists and is a positive real number. QED.

--------------------

Comment: This is from the real analysis portion of a freshman honors calculus course, i.e. a rigorous treatment of the real numbers which is the basis for calculus like limits and stuff. Really understanding calculus involves really understanding how the real number system works.

--------------------

Practice Problem: (Stanford Math 51H) If $ a, b \in \mathbb{R} $ with $ a < b $, prove:

(a) There is a rational $ r \in (a,b) $.
(b) There is an irrational $ c \in (a,b) $.
(c) $ (a,b) $ contains infinitely many rationals and infinitely many irrationals.

Monday, November 20, 2006

Function After Function. Topic: Algebra/Calculus. Level: AIME.

Problem: (1985 Putnam - B2) Define polynomials $ f_n(x) $ for $ n \ge 0 $ by $ f_0(x) = 1 $, $ f_n(0) = 0 $ for $ n \ge 1 $, and

$ \frac{d}{dx} f_{n+1}(x) = (n+1)f_n(x+1) $

for $ n \ge 0 $. Find, with proof, the explicit factorization of $ f_{100}(1) $ into powers of distinct primes.

Solution: As usual, we try a few small cases and look for an induction (this is important to get in the habit of, especially when an arbitrarily large number like $ 100 $ is given). First, convert the derivative into an integral; the initial conditions $ f_n(0) = 0 $ fix the constants of integration, so we use them implicitly without citing them each time. So we find that

$ \displaystyle f_1(x) = \int f_0(x+1)dx = \int 1dx = x $

$ \displaystyle f_2(x) = 2 \int f_1(x+1)dx = 2 \int (x+1)dx = x^2+2x = x(x+2) $

$ \displaystyle f_3(x) = 3 \int f_2(x+1)dx = 3 \int (x^2+4x+3) dx = x^3+6x^2+9x = x(x+3)^2 $

$ \displaystyle f_4(x) = 4 \int f_3(x+1)dx = 4 \int (x^3+9x^2+24x+16)dx = x(x+4)^3 $.

By this point, you hope you have figured the pattern out because the next calculations get pretty nasty and we don't want to go there. So we conjecture, naturally, that $ f_n(x) = x(x+n)^{n-1} $ for all $ n \ge 1 $. We can show this by induction, with the base cases given above.

$ \displaystyle f_{n+1}(x) = (n+1) \int f_n(x+1)dx = (n+1) \int (x+1)(x+n+1)^{n-1} dx $.

So... how do we integrate this. In fact, a common strategy can be applied here: whenever some function of $ x $ is taken to a power, try substituting that function of $ x $ out. Here, we use $ u = x+n+1 $ so $ du = dx $. The new integral becomes

$ \displaystyle f_{n+1}(x) = (n+1) \int (u-n)u^{n-1}du = (n+1) \int (u^n-nu^{n-1})du = u^{n+1}-(n+1)u^n $.

Factoring and substituting back, we get

$ f_{n+1}(x) = u^n(u-n-1) = (x+n+1)^n(x+n+1-n-1) = x(x+n+1)^n $,

completing the induction. Hence

$ f_n(x) = x(x+n)^{n-1} \Rightarrow f_{100}(1) = 101^{99} $,

which is, incidentally, the prime factorization. QED.
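
It is easy to let a computer grind out the first several $ f_n $ from the recursion and confirm the pattern $ f_n(1) = (n+1)^{n-1} $ before trusting the induction. Hypothetical sketch with dense coefficient vectors (the shift-then-integrate bookkeeping is my own):

#include <cstdio>
#include <vector>
#include <cmath>
using namespace std;

int main() {
    vector<double> f = {1.0};                      // f_0(x) = 1, coefficient of x^k at index k
    for (int n = 0; n < 8; n++) {
        // shift: g(x) = f_n(x+1), via binomial expansion of each (x+1)^k
        vector<double> g(f.size(), 0.0);
        for (int k = 0; k < (int)f.size(); k++) {
            double binom = 1;                      // C(k, j), starting at j = 0
            for (int j = 0; j <= k; j++) {
                g[j] += f[k] * binom;
                binom = binom * (k - j) / (j + 1);
            }
        }
        // integrate: f_{n+1}(x) = (n+1) * int_0^x g(t) dt   (constant term 0, so f_{n+1}(0) = 0)
        vector<double> next(g.size() + 1, 0.0);
        for (int k = 0; k < (int)g.size(); k++) next[k + 1] = (n + 1) * g[k] / (k + 1);
        f = next;
        double fAt1 = 0;
        for (double c : f) fAt1 += c;
        printf("f_%d(1) = %.0f, expected %d^%d = %.0f\n", n + 1, fAt1, n + 2, n, pow(n + 2.0, (double)n));
    }
}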

--------------------

Comment: A really cool problem involving function recursion and a nice induction. Remember that whenever you see a recursion, induction is the way to go!

--------------------

Practice Problem: Solve the equation $ 2z^3-(5+6i)z^2+9iz+(1-3i) = 0 $ knowing that one of the solutions is real.

Saturday, November 18, 2006

The Matrix?! Topic: Calculus/Linear Algebra.

Definition: A square matrix $ A $ is diagonalizable if there exist matrices $ V, D $ such that $ V $ is invertible, $ D $ is diagonal (has only entries on the main diagonal), and $ A = VDV^{-1} $.

--------------------

Problem: Solve the differential equation $ x^{\prime}(t) = Ax(t) $ where $ A $ is a diagonalizable $ n \times n $ square matrix and $ x(t) $ is a vector valued function (which must have $ n $ components).

Solution: Well this is a lot like the regular differential equation $ y^{\prime} = ky $ which has solution $ y = y_0 e^{kt} $. So let's guess that

$ x(t) = e^{At}x(0) $

is the solution. But what exactly does $ e^{At} $ mean? Consider the following.

--------------------

Definition: If $ f $ is a function and $ D $ is a diagonal square matrix, then we say that $ f(D) $ is simply the matrix with $ f $ applied to all the elements on the diagonal.

If $ A $ is diagonalizable, then we have $ A = VDV^{-1} $ and we define $ f(A) = Vf(D)V^{-1} $.

--------------------

So does $ x(t) = e^{At}x(0) $ work as a solution to the differential equation? It turns out that, by using the definition of the derivative we can show that

$ \frac{d}{dt}[e^{Dt}] = De^{Dt} $

and

$ \frac{d}{dt}[e^{At}x(0)] = VDe^{Dt}V^{-1}x(0) $.

However, if we use the function $ g(x) = xe^{xt} $, we get

$ Ae^{At}x(0) = g(A)x(0) = Vg(D)V^{-1}x(0) = VDe^{Dt}V^{-1}x(0) $,

so $ x^{\prime}(t) = \frac{d}{dt}[e^{At}x(0)] = Ae^{At}x(0) = Ax(t) $,

as desired. QED.
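
A small worked example helps make $ e^{At} $ concrete. Below is a hypothetical sketch with a $ 2 \times 2 $ symmetric $ A $ (so the eigendecomposition playing the role of $ VDV^{-1} $ can be written by hand, with $ V^{-1} = V^T $), checking numerically that $ x(t) = e^{At}x(0) $ really satisfies $ x^{\prime}(t) = Ax(t) $; the particular matrix and initial condition are arbitrary choices of mine.

#include <cstdio>
#include <cmath>

int main() {
    double a = 2, b = 1, d = 3;                 // A = [[2,1],[1,3]], symmetric hence diagonalizable
    double x0[2] = {1, -1};                     // initial condition x(0)

    // eigenvalues and orthonormal eigenvectors of A
    double disc = sqrt((a - d) * (a - d) + 4 * b * b);
    double l1 = (a + d + disc) / 2, l2 = (a + d - disc) / 2;
    double v1[2] = {b, l1 - a}, v2[2] = {b, l2 - a};
    double n1 = hypot(v1[0], v1[1]), n2 = hypot(v2[0], v2[1]);
    v1[0] /= n1; v1[1] /= n1; v2[0] /= n2; v2[1] /= n2;

    // x(t) = V e^{Dt} V^T x(0), written out component-wise
    auto x = [&](double t, double out[2]) {
        double c1 = v1[0] * x0[0] + v1[1] * x0[1];      // V^T x(0)
        double c2 = v2[0] * x0[0] + v2[1] * x0[1];
        c1 *= exp(l1 * t); c2 *= exp(l2 * t);           // apply e^{Dt}
        out[0] = v1[0] * c1 + v2[0] * c2;               // back to the original basis
        out[1] = v1[1] * c1 + v2[1] * c2;
    };

    // check x'(t) = A x(t) at t = 0.5 by a central difference
    double t = 0.5, h = 1e-6, xp[2], xm[2], xc[2];
    x(t + h, xp); x(t - h, xm); x(t, xc);
    printf("x'(t)  = (%.6f, %.6f)\n", (xp[0] - xm[0]) / (2 * h), (xp[1] - xm[1]) / (2 * h));
    printf("A x(t) = (%.6f, %.6f)\n", a * xc[0] + b * xc[1], b * xc[0] + d * xc[1]);
}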

--------------------

Comment: Applying functions to matrices is a super cool thing and being able to solve differential equations with matrix coefficients is even better. Linear algebra provides all sorts of new ways to work with matrices.

--------------------

Practice Problem: Show that all symmetric square matrices are diagonalizable.

Thursday, November 16, 2006

Gammazzz. Topic: Calculus/S&S.

Definition: (Gamma function) $ \displaystyle \Gamma(z) = \int_0^{\infty} t^{z-1}e^{-t} dt = \lim_{n \rightarrow \infty} \frac{n! \cdot n^z}{z(z+1)(z+2) \cdots (z+n)} $. Note that the Gamma function satisfies the property $ \Gamma(z+1) = z\Gamma(z) $ and $ \Gamma(k+1) = k! $ for nonnegative integers $ k $.

--------------------

Problem: Prove that $ \Gamma(1-z) \Gamma(z) = \frac{\pi}{\sin{\pi z}} $.

Solution: We use the second definition for $ \Gamma $ given above to evaluate

$ \displaystyle \Gamma(-z) \Gamma(z) = \lim_{n \rightarrow \infty} \frac{n! \cdot n^z}{z(z+1)(z+2) \cdots (z+n)} \cdot \lim_{n \rightarrow \infty} \frac{n! \cdot n^{-z}}{-z(-z+1)(-z+2) \cdots (-z+n)} $.

Divide the top and bottom by $ n! $ in both limits to obtain

$ \displaystyle \Gamma(-z) \Gamma(z) = \lim_{n \rightarrow \infty} \frac{ n^z}{z(1+\frac{z}{1})(1+\frac{z}{2}) \cdots (1+\frac{z}{n})} \cdot \lim_{n \rightarrow \infty} \frac{n^{-z}}{-z(1-\frac{z}{1})(1-\frac{z}{2}) \cdots (1-\frac{z}{n})} $.

Multiplying these limits together, we get

$ \displaystyle \Gamma(-z) \Gamma(z) = \lim_{n \rightarrow \infty} \frac{ 1}{-z^2(1+\frac{z}{1})(1-\frac{z}{1})(1+\frac{z}{2})(1-\frac{z}{2}) \cdots (1+\frac{z}{n})(1-\frac{z}{n})} $.

Notice, however, that

$ \displaystyle \sin{\pi z} = \pi z \cdot \prod_{n=1}^{\infty} \left(1+\frac{z}{n}\right)\left(1-\frac{z}{n}\right) $

because it has zeros at all of the integers and has first term $ \pi z $ (by Taylor series). So

$ \displaystyle \Gamma(-z) \Gamma(z) = \frac{ 1}{-z \cdot \frac{\sin{\pi z}}{\pi}} $

and

$ \displaystyle -z \Gamma(-z) \Gamma(z) = \frac{\pi}{\sin{\pi z}} $.

Lastly, by the property $ \Gamma(1-z) = -z\Gamma(-z) $, we arrive at the identity

$ \Gamma(1-z) \Gamma(z) = \frac{\pi}{\sin{\pi z}} $.

QED.
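
The identity is easy to spot-check with the standard library's Gamma function (std::tgamma); hypothetical sketch:

#include <cstdio>
#include <cmath>

int main() {
    const double PI = acos(-1.0);
    for (double z = 0.1; z < 1.0; z += 0.2) {
        double lhs = tgamma(1 - z) * tgamma(z);
        double rhs = PI / sin(PI * z);
        printf("z = %.1f:  Gamma(1-z)Gamma(z) = %.8f,  pi/sin(pi z) = %.8f\n", z, lhs, rhs);
    }
}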

--------------------

Comment: The Gamma function is quite a useful function and actually shows up a lot in higher-level math. The proof above assumes the equivalence of the two forms of the function, which can be proven using the fact that the Gamma function is log-convex. In fact, it can be shown that only one function exists that satisfies $ f(1) = 1 $, $ f(x+1) = xf(x) $, and is log-convex.

--------------------

Practice Problem: Show that $ \Gamma(x) $ is log-convex for $ x > 0 $, i.e. $ \log{\Gamma(tx_1+(1-t)x_2)} \le t\log{\Gamma(x_1)}+(1-t)\log{\Gamma(x_2)} $ for all $ 0 \le t \le 1 $.

2006 Fall Classic Tests.

Practice Tests:

Team
Ciphering
Individual

Tuesday, November 14, 2006

You Have Been Numberized. Topic: Calculus/Statistics.

Introduction: Consider devising a rating system in which matches consisting of an arbitrary number of "players" are "played." After each match, the rating of each player should reflect the position of that player among all competitors.

Solution: Obviously, this is only an idea. There's not a set "solution" to this. Let the players be $ P_1, P_2, \ldots, P_n $ with corresponding ratings $ R_1, R_2, \ldots, R_n $ and volatilities $ V_1, V_2, \ldots, V_n $. We will let the initial values of $ R_i $ be $ 1000 $ and the initial values of $ V_i $ be $ 100 $. Basically, the performance of player $ P_i $ will be a random variable with normal distribution with mean $ R_i $ and standard deviation $ V_i $. The performance distribution will be

$ S_i(x) = \frac{1}{V_i \sqrt{2\pi}} e^{-\frac{(x-R_i)^2}{2V_i^2}} $.

We let $ \displaystyle D_i(x) = \int_{-\infty}^x S_i(t) dt $ be the cumulative distribution.

Assume $ k $ players $ P_1, P_2, \ldots, P_k $ participate in a match. We will recalculate ratings according to the following steps:

(1) Find the weight of the competition, depending on the total rating of the participants. We would like it to take on values between zero and one, so try something like

$ W = \left(\frac{\sum_{i=1}^k R_i}{\sum_{i=1}^n R_i}\right)^{\frac{1}{C}} $,

for a suitable constant $ C > 1 $. Basically, this is small when there are not a lot of players, but increases rapidly as the rating of the participants approaches the total rating.

(2) For a player $ P_m $, we will calculate his expected rank based on the normal distribution. Let $ P(m, j) $ be the probability that $ P_m $ loses to $ P_j $. It turns out to be

$ \displaystyle P(m,j) = \int_{-\infty}^{\infty} S_j(x) \cdot D_m(x) dx $.

So if we sum up the probabilities that $ P_m $ will lose to each player (including himself), we get the expected rank $ E_m $ of $ P_m $, or

$ \displaystyle E_m = 0.5+\sum_{j=1}^k P(m,j) $,

where the $ 0.5 $ is to shift the best ranking to $ 1 $.

(3) The actual rank of $ P_m $ is $ A_m $, and we want to base rating changes on the difference $ E_m-A_m $. If $ I_m $ is the number of competitions $ P_m $ has participated in, calculate the new $ R^{\prime}_m $ as

$ R^{\prime}_m = R_m+\frac{WK}{I_m+1}(E_m-A_m) $,

where $ K $ is a suitable positive constant for equating differences in ranking to differences in rating.

(4) We now update the volatility $ V_m $. If the player's performance is relatively close to the expected performance, we want $ V_m $ to decrease to imply stability of the player. On the other hand, if the player performs exceptionally well or poorly, we want $ V_m $ to increase to reflect inconsistency. We propose the following change:

$ V^{\prime}_m = V_m+WQ_1(|E_m-A_m|-Q_2) $,

where $ Q_1, Q_2 $ are suitable positive constants. $ Q_1 $ determines the magnitude of volatility change and $ Q_2 $ determines the threshold for which performance is considered relatively constant.

All in all, this rating system depends a lot on experimentally finding the constants to put into working use. Otherwise, it seems like it would work.
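
None of the constants above are pinned down, but step (2) at least is straightforward to implement: for independent normal performances, the author's integral $ \int S_j(x)D_m(x)dx $ has the equivalent closed form $ \Phi\!\left(\frac{R_j-R_m}{\sqrt{V_m^2+V_j^2}}\right) $, where $ \Phi $ is the standard normal CDF. A hypothetical sketch with made-up ratings and volatilities:

#include <cstdio>
#include <cmath>
#include <vector>
using namespace std;

// standard normal CDF
double Phi(double t) { return 0.5 * (1 + erf(t / sqrt(2.0))); }

int main() {
    // hypothetical match: ratings and volatilities of the k participants
    vector<double> R = {1000, 1100, 950, 1250};
    vector<double> V = {100, 80, 120, 60};
    int k = R.size();
    for (int m = 0; m < k; m++) {
        double E = 0.5;
        for (int j = 0; j < k; j++)
            // P(P_m loses to P_j) = integral of S_j * D_m = Phi((R_j - R_m)/sqrt(V_m^2 + V_j^2))
            E += Phi((R[j] - R[m]) / sqrt(V[m] * V[m] + V[j] * V[j]));
        printf("player %d: expected rank E_m = %.3f\n", m + 1, E);
    }
}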

--------------------

Comment: Rating systems are always controversial and people are always trying to devise ways to get around it. On a lighter note, many restrictions can be added like a rating change cap or a volatility cap. Stabilizers for higher ratings are also a possibility (i.e. reducing the weight of each match). Any thoughts?

Monday, November 13, 2006

Step By Step. Topic: Calculus.

Problem #1: Show that, for all continuous, integrable functions $ f $ on the interval $ [a,b] $ (plus whatever other conditions you need), we have

$ \displaystyle \left(\int_a^b f(x)dx \right)^2 = \int_a^b \int_a^b f(x) \cdot f(y) dx dy $.

Problem #2: Show that, for all continuous, integrable functions $ f $ on the interval $ [0, \infty) $ (plus whatever other conditions you need), we have

$ \displaystyle \int_0^{\infty} \int_0^{\infty} f(x) \cdot f(y) dx dy = \int_0^{\frac{\pi}{2}} \int_0^{\infty} r \cdot g(r, \theta) dr d\theta $

where $ (r, \theta) $ are the polar coordinates for $ (x,y) $ and $ g(r, \theta) = f(x) \cdot f(y) $. Indeed, we notice that $ g(r, \theta) = f(r \cos{\theta}) \cdot f(r \sin{\theta}) $.

Problem #3: Use the above two properties to evaluate

$ \displaystyle \int_0^{\infty} e^{-x^2} dx $.

--------------------

Comment: Probably one of the coolest tricks in my limited calculus background. The first identity is nice to know and simple to prove, but the second is the really big leap that is useful for evaluating definite integrals which you cannot simply integrate. Hopefully, doing each of these problems, the method will become clear.

Saturday, November 11, 2006

Simon Says Factor. Topic: Linear Algebra.

Problem: (2003 IMC Day 2 - #1) Let $ A $ and $ B $ be $ n \times n $ real matrices such that $ AB+B+A = 0 $. Prove that $ AB = BA $.

Solution: Well, seeing the $ AB+B+A $, we think Simon's Favorite Factoring Trick. But, we have to remember that we're working with matrices, not numbers. So instead of $ (x+1)(y+1) = xy+x+y+1 $, we have

$ (A+I_n)(B+I_n) = AB + I_nB+AI_n+I_n^2 $,

where $ I_n $ is the $ n \times n $ identity matrix (the equivalent of $ 1 $ in the matrix world). Recall that for any square matrix $ M $ we have $ MI = IM = M $ (where $ I $ is the identity matrix of corresponding size), so

$ AB + I_nB+AI_n+I_n^2 = AB+B+A+I_n = 0+I_n = I_n $

from the given condition. But the equation

$ (A+I_n)(B+I_n) = I_n $

means that $ A+I_n $ and $ B+I_n $ are inverses of each other. Which consequently means we can multiply them in either order. So

$ (A+I_n)(B+I_n) = (B+I_n)(A+I_n) $

$ AB+B+A+I_n = BA+A+B+I_n \Rightarrow AB = BA $

as desired. QED.

--------------------

Comment: Just basic algebraic manipulations if you knew several key properties of matrices and recognized the factoring. Always good to keep these things handy!
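
If you want to see the identity in action numerically (only an illustration, assuming numpy is available, and not part of the proof), you can manufacture a pair satisfying the hypothesis by inverting $ A+I_n $:

import numpy as np

# Construct a pair with AB + B + A = 0: pick A with A + I invertible,
# then B + I must be its inverse, i.e. B = (A + I)^{-1} - I.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
I = np.eye(n)
B = np.linalg.inv(A + I) - I

print(np.allclose(A @ B + B + A, np.zeros((n, n))))  # True
print(np.allclose(A @ B, B @ A))                     # True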

--------------------

Practice Problem: (1999 IMC Day 1 - #1) Show that for all natural numbers $ n $ there exists a real $ n \times n $ matrix $ A $ such that $ A^3 = A+I $. Furthermore, prove that $ |A| > 0 $ for all such $ A $. [Reworded]

Thursday, November 9, 2006

Dimensions Everywhere. Topic: Linear Algebra.

Problem: (1998 IMC Day 1 - #1) Let $ V $ be a $ 10 $-dimensional real vector space and $ U_1, U_2 $ two linear subspaces such that $ U_1 \subseteq U_2 $, $ \dim U_1 = 3 $, and $ \dim U_2 = 6 $. Let $ \epsilon $ be the set of linear maps $ T: V \rightarrow V $ which have $ T(U_1) \subseteq U_1 $ and $ T(U_2) \subseteq U_2 $. Calculate the dimension of $ \epsilon $.

Solution: Well, we have a total of $ 10 \cdot 10 = 100 $ degrees of freedom without the constraints. Work in a basis adapted to the flag $ U_1 \subseteq U_2 \subseteq V $. The constraint $ T(U_1) \subseteq U_1 $ takes away $ 3 \cdot 7 = 21 $ of them (the three basis vectors of $ U_1 $ must map into $ U_1 $, so their images lose the $ 7 $ coordinates outside $ U_1 $), while the constraint $ T(U_2) \subseteq U_2 $ takes away an additional $ 3 \cdot 4 = 12 $ (the three basis vectors of $ U_2 $ outside $ U_1 $ must map into $ U_2 $, losing the $ 4 $ coordinates outside $ U_2 $). Hence the dimension of $ \epsilon $ is $ 100-21-12 = 67 $. QED.

--------------------

Comment: A pretty standard problem on linear transformations and linear algebra. Easy by most standards, since it really only involves some simple calculations.
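
For readers who like seeing the count concretely, here is a minimal sketch in Python. It assumes a basis adapted to the flag $ U_1 \subseteq U_2 \subseteq V $, in which the two constraints just zero out blocks of the matrix, and it counts the remaining free entries.

# Count free entries of a 10x10 matrix of a map T with T(U1) in U1 and T(U2) in U2,
# in a basis whose first 3 vectors span U1 and whose first 6 span U2:
# columns 1-3 may only hit rows 1-3, columns 4-6 only rows 1-6, columns 7-10 anything.
free = 0
for col in range(10):
    if col < 3:
        free += 3
    elif col < 6:
        free += 6
    else:
        free += 10
print(free)  # 67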

--------------------

Practice Problem: What is the dimension of the span of the vectors $ (1, 2, 3) $, $ (2, 3, 4) $, and $ (0, -1, -2) $?

Tuesday, November 7, 2006

Rank Me! Topic: Linear Algebra.

Problem: (2005 IMC Day 1 - #1) Let $ A $ be the $ n \times n $ matrix with entries $ A_{ij} = i+j $. Find the rank of $ A $.

Solution: Well we clearly have the $ k $th row of the matrix equal to

$ R_k = (k+1, k+2, \ldots, k+n) $.

So we just need to reduce this to row-echelon form through elementary operations. Well, we can take

$ R_k \rightarrow R_k-R_{k-1} $

for $ k \ge 2 $, so we now have

$ R_1 = (2, 3, \ldots, n+1) $ and $ R_k = (1, 1, \ldots, 1) $

for $ k \ge 2 $. Now for all $ k > 2 $, take

$ R_k \rightarrow R_k-R_2 $

and take $ R_1 \rightarrow R_1-2R_2 $ so our matrix becomes

$ R_1 = (0, 1, \ldots, n-1) $, $ R_2 = (1, 1, \ldots, 1) $, and $ R_k = (0, 0, \ldots, 0) $

for $ k>2 $. Swapping $ R_1 $ and $ R_2 $ gives us row-echelon form, from which we can easily see that there are two pivot elements, which means the rank of $ A $ is two. Note that for $ n = 1 $ the rank is one. QED.

--------------------

Comment: Really not a difficult problem if you know any fundamental linear algebra and the operations you can perform on a matrix to calculate its rank. Linear algebra is super cool and even useful on competitions (when you get to college)!
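
A quick numerical check of the answer (assuming numpy is available; obviously not needed for the row reduction above):

import numpy as np

# Rank of the n x n matrix with entries A_ij = i + j (1-indexed), for several n.
for n in [1, 2, 5, 10]:
    A = np.fromfunction(lambda i, j: (i + 1) + (j + 1), (n, n))
    print(n, np.linalg.matrix_rank(A))  # rank 1 for n = 1, rank 2 otherwise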

--------------------

Practice Problem: For any matrix $ A $, prove that $ \text{rank}(A) = \text{rank}(A^T) $.

What Are The Chances Of That?

Tutorials:

Probability

Sunday, November 5, 2006

Oh Really... Topic: S&S/Sets. Level: AIME/Olympiad.

Problem: (1999 IMC Day 1 - #2) Does there exist a bijective map $ f(n): \mathbb{N} \rightarrow \mathbb{N} $ so that $ \displaystyle \sum_{n=1}^{\infty} \frac{f(n)}{n^2} $ is finite?

Solution: For any bijection $ f $, the values $ f(1), f(2), \ldots, f(k) $ are $ k $ distinct positive integers, so the partial sum $ \displaystyle S_k = \sum_{n=1}^k \frac{f(n)}{n^2} $ is at least what it would be if those values were exactly $ 1, 2, \ldots, k $, i.e. if $ f $ restricted to a bijective map $ f_k(n): \{1,2, \ldots, k\} \rightarrow \{1,2, \ldots, k\} $. Since the sets

$ \{1, 2, \ldots, k \} $ and $ \{\frac{1}{1^2}, \frac{1}{2^2}, \cdots, \frac{1}{k^2} \} $

are oppositely ordered, by Rearrangement we have

$ \displaystyle S_k \ge \frac{1}{1^2}+\frac{2}{2^2}+\cdots+\frac{k}{k^2} = \sum_{n=1}^k \frac{1}{n} $.

Hence $ \displaystyle \lim_{k \rightarrow \infty} S_k \ge \lim_{k \rightarrow \infty} \sum_{n=1}^k \frac{1}{n} = \infty $, so the answer is no. QED.

--------------------

Comment: Basically, knowledge of the Rearrangement inequality killed this problem. It's also useful to know what a bijective map is - a function $ f:A \rightarrow B $ where $ A $ and $ B $ are sets such that for any $ x, y \in A $ we have $ f(x) = f(y) \Rightarrow x = y $ and for any $ b \in B $ there exists $ x \in A $ such that $ f(x) = b $. A lot of higher level math is vocabulary and notation, which is sort of a pity, but that's just how it is.
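
A tiny numerical illustration of the key bound (not a proof, standard library only): for random permutations of $ \{1, 2, \ldots, k\} $, the partial sum already dominates the harmonic number $ H_k $, which diverges.

import random

# Check sum f(n)/n^2 >= H_k for a few random permutations f of {1, ..., k}.
k = 1000
H_k = sum(1 / n for n in range(1, k + 1))
for _ in range(5):
    perm = list(range(1, k + 1))
    random.shuffle(perm)
    S_k = sum(perm[n - 1] / n**2 for n in range(1, k + 1))
    print(S_k >= H_k)  # True every time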

--------------------

Practice Problem: Evaluate $ \displaystyle \int_0^{\infty} e^{-x^2}dx $.

Saturday, November 4, 2006

Fall Classic Practice Tests

Practice Tests:

2000
2001
2002

Pick And Choose. Topic: Combinatorics. Level: AMC.

Problem: (2006 Math Is Cool Championships) How many rectangles can you make in a $ 5 \times 6 $ grid? [Reworded]

Solution #1: Here's the "harder" solution, involving counting and subtracting. Consider all $ 42 $ lattice points (a $ 5 \times 6 $ grid of squares has $ 6 \times 7 = 42 $ of them). Pick any point first; there are $ 42 $ possible. Then pick any point not on the same horizontal or vertical line. There are $ 42-6-7+1 = 30 $ of these. Form a rectangle with these as opposite vertices. We get $ 42 \cdot 30 $ such choices.

But wait, each rectangle gets counted $ 4 $ times this way (either diagonal, in either order: vertices $ 1-3 $, $ 3-1 $, $ 2-4 $, $ 4-2 $). So divide by $ 4 $ and get $ 315 $. QED.

--------------------

Solution #2: Here's the "clever" solution. We will pick four lines - two vertical and two horizontal. It's not hard to see that each set of four lines will determine a unique rectangle (formed by the four intersection points). Hence the result is

$ 6C2 \cdot 7C2 = 15 \cdot 21 = 315 $.

QED.

--------------------

Comment: The first is probably more practical in a competition setting, but the second is probably more generally applicable and more educational. In any case, it's good to see two approaches to the same problem.
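
And if you don't trust either count, a brute force over opposite-corner pairs (a quick script, nothing clever) agrees:

from itertools import combinations

# Treat the 5x6 grid of squares as its 6x7 = 42 lattice points and collect
# every axis-aligned rectangle determined by a pair of opposite corners.
points = [(x, y) for x in range(7) for y in range(6)]
rects = set()
for (x1, y1), (x2, y2) in combinations(points, 2):
    if x1 != x2 and y1 != y2:
        rects.add((min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2)))
print(len(rects))  # 315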

--------------------

Practice Problem: (2006 Math Is Cool Championships) How many integers are in the range of the function $ f(x) = \frac{4x^2+75}{2x^2+3} $?

Friday, November 3, 2006

Congratulations!

Bellevue's 9th/10th team placed third!

Bellevue's 11th/12th team placed second!

Wednesday, November 1, 2006

Do The Math... In Your Head.

Tutorials:

Mental Math Tricks

The Cheat Is To The Limit. Topic: Calculus.

Problem: (1997 IMC Day 1 - #1) Let $ \{e_n\}^{\infty}_{n=1} $ be a sequence of positive reals with $ \displaystyle \lim_{n \rightarrow \infty} e_n = 0 $. Find

$ \displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} $.

Solution: First of all, you should notice that this looks a lot like a Riemann sum, so maybe we can transform it into an integral. In fact, this will solve the problem. First, bound the limit from below: since $ e_n > 0 $, each term is at least $ \ln{\frac{k}{n}} $, and those Riemann sums converge to the (improper but convergent) integral, so

$ \displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} \ge \int^1_0 \ln{x}dx = [x\ln{x}-x]^1_0 = -1 $.

We then also have

$ \displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} \le \int_0^1 \ln{(x+e)} dx = [(x+e)\ln{(x+e)}-(x+e)]^1_0 = (1+e)\ln{(1+e)}-e\ln{e}-1 $

since for any fixed $ e > 0 $ we have $ e_n \le e $ for all sufficiently large $ n $. Thus we can take $ e \rightarrow 0 $ as $ e_n \rightarrow 0 $. This means

$ \displaystyle -1 \le \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} \le (1+e)\ln{(1+e)}-e\ln{e}-1 $.

Since $ e\ln{e} \rightarrow 0 $ as $ e \rightarrow 0 $, the upper bound converges to $ -1 $, matching the lower bound, so the limit is $ -1 $. QED.

--------------------

Comment: A classic example of Riemann sums, which every calculus student should be familiar with (otherwise they aren't really learning calculus...). Applying some simple bounding techniques gives us the answer pretty easily.
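
A quick numerical check of the limit, using the sample sequence $ e_n = \frac{1}{n} $ (any positive sequence tending to $ 0 $ would do):

import math

# (1/n) * sum of ln(k/n + e_n) with e_n = 1/n; should approach -1.
for n in [10, 100, 1000, 100000]:
    e_n = 1 / n
    print(n, sum(math.log(k / n + e_n) for k in range(1, n + 1)) / n)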

--------------------

Practice Problem: (1997 IMC Day 1 - #2) Let $ a_n $ be a sequence of reals. Suppose $ \displaystyle \sum a_n $ converges. Do these sums converge as well?

(a) $ a_1+a_2+(a_4+a_3)+(a_8+\cdots+a_5)+(a_{16}+\cdots+a_9)+\cdots $

(b) $ a_1+a_2+(a_3)+(a_4)+(a_5+a_7)+(a_6+a_8)+(a_9+a_{11}+a_{13}+a_{15}) +(a_{10}+a_{12}+a_{14}+a_{16})+\cdots $

Monday, October 30, 2006

Symmetry For The Win. Topic: Calculus.

Problem: Evaluate the improper integral $ I = \displaystyle \int_0^{\pi} \ln{(\sin{x})} dx $.

Solution: As the topic suggests, we will look for a symmetry to simplify the problem. Notice the identity $ \displaystyle \int_0^a f(x) dx = \int_0^a f(a-x) dx $ (since we're just integrating across the interval from different "directions"). Also, by the symmetry $ \sin{x} = \sin{(\pi-x)} $, we have $ I = 2J $, where $ \displaystyle J = \int_{0}^{\frac{\pi}{2}}\ln{(\sin{x})}dx $. Using the first identity on $ J $,

$ \displaystyle J = \int_{0}^{\frac{\pi}{2}}\ln{(\sin{x})}dx = \int_{0}^{\frac{\pi}{2}}\ln{\left(\sin{\left(\frac{\pi}{2}-x\right)}\right)}dx = \int_{0}^{\frac{\pi}{2}}\ln{(\cos{x})}dx $.

Adding the old integral to the new one, we have

$ \displaystyle 2J = \int_{0}^{\frac{\pi}{2}}\ln{(\sin{x})}dx+\int_{0}^{\frac{\pi}{2}}\ln{(\cos{x})}dx = \int_{0}^{\frac{\pi}{2}}\ln{\left(\frac{1}{2}\sin{2x}\right)}dx $

from the property of logs and the double-angle identity (here it is again!). But in fact this expression is simply

$ \displaystyle 2J =-\frac{\pi}{2}\ln{2}+\frac{1}{2}\int_{0}^{\pi}\ln{(\sin{u})}du $ (*)

by the substitution $ u = 2x $. But $ \displaystyle \frac{1}{2}\int_{0}^{\pi}\ln{(\sin{u})}du = \frac{1}{2}I = J $, so plugging back into (*) we obtain $ \displaystyle 2J =-\frac{\pi}{2}\ln{2}+J $, which gives $ J = -\frac{\pi}{2}\ln{2} $ and hence $ I = 2J = -\pi\ln{2} $. QED.

--------------------

Comment: The function is not nice to actually integrate; it involves the polylogarithm function if you try here. This is an important example of how symmetry can help a ton in integration because there are so many functions that cannot be integrated with elementary functions but can be evaluated over an interval through different techniques.
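
Purely as a numerical cross-check of both values (standard library only, and certainly not a substitute for the symmetry argument):

import math

# Midpoint Riemann sums for the improper integrals of ln(sin x).
def midpoint(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(midpoint(lambda x: math.log(math.sin(x)), 0, math.pi))      # about -pi*ln 2   = -2.1776
print(midpoint(lambda x: math.log(math.sin(x)), 0, math.pi / 2))  # about -(pi/2)*ln 2 = -1.0888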

--------------------

Practice Problem: Evaluate $ \displaystyle \int_0^{\frac{\pi}{2}} \sin^4{x} dx $ without actually finding the antiderivative.

Sunday, October 29, 2006

Out Of Place Trig. Topic: Geometry/Trigonometry. Level: AMC/AIME.

Problem: (2006-2007 Warm-Up 4) Two sides of a triangle are $ 4 $ and $ |\cos{\theta}| $ and the angle between them is $ \theta $. If $ 0 < \theta < \pi $, what is the maximum area of the triangle?

Solution: Well, given two sides and the angle between them, there is only one formula for the area of a triangle that comes to mind. $ [ABC] = \frac{1}{2}ab \sin{(\angle C)} $. So plugging this in, we get the area to be

$ \frac{1}{2} (4) |\cos{\theta}| \sin{\theta} = 2 \sin{\theta} |\cos{\theta}| $.

By the double-angle identity, this is equal to

$ |\sin{(2\theta)}| $

for $ 0 < \theta < \pi $. But the maximum of this is clearly $ 1 $ and it is obtained when $ \theta = \frac{\pi}{4}, \frac{3 \pi}{4} $. QED.

--------------------

Comment: This problem shouldn't have been too hard; finding that specific formula for the area of the triangle could have been derived by drawing an altitude. And the double-angle identity should always be known. Lastly, maximizing the sine function is a given.

--------------------

Practice Problem: (2006 Skyview) If $ \sin{x} = \frac{3}{5} $ and $ \cos{x} = \frac{4}{5} $, what is $ \sin{(3x)} $?

Friday, October 27, 2006

Square Sum Stuff. Topic: Polynomials/S&S. Level: AMC/AIME.

Problem: Evaluate the summation

$ \displaystyle \sum_{k=1}^{\infty} \frac{k^2}{2^k} = \frac{1^2}{2^1}+\frac{2^2}{2^2}+\frac{3^2}{2^3}+\frac{4^2}{2^4}+\cdots $.

Solution: Here's a technique that will help you evaluate infinite series that are of the form polynomial over exponential. It's based on the idea of finite differences:

If $ P $ is a polynomial with integer coefficients of degree $ n $ then

$ P(x+1)-P(x) $

is a polynomial of degree $ n-1 $ (not hard to show; just think about it).

So let

$ S = \frac{1^2}{2^1}+\frac{2^2}{2^2}+\frac{3^2}{2^3}+\frac{4^2}{2^4}+\cdots $.

Then consider $ 2S $ by simply multiplying each term by $ 2 $:

$ 2S = 1^2+\frac{2^2}{2^1}+\frac{3^2}{2^2}+\frac{4^2}{2^3}+\cdots $.

And now find the difference $ 2S-S = S $ by subtracting the terms with equal denominators. We get

$ S = 2S-S = 1+\frac{2^2-1^2}{2^1}+\frac{3^2-2^2}{2^2}+\frac{4^2-3^2}{2^3}+\cdots $

$ S = 1+\frac{3}{2^1}+\frac{5}{2^2}+\frac{7}{2^3}+\cdots $.

Notice that the numerator is now a polynomial of degree $ 1 $ instead of $ 2 $. Repeating this, we have

$ 2S = 2+3+\frac{5}{2^1}+\frac{7}{2^2}+\cdots $

and

$ S = 2S-S = 2 + 2 + \frac{5-3}{2^1}+\frac{7-5}{2^2}+\cdots $

$ S = 2 + (2 + 1+\frac{1}{2}+\cdots) $.

Notice that the latter part is just a geometric series which sums to

$ 2 + 1 + \frac{1}{2} + \cdots = 4 $

so $ S = 2+4 = 6 $. QED.

--------------------

Comment: The method of finite differences is extremely useful and is basically a simplified version of calculus - $ P(x+1)-P(x) \approx P^{\prime}(x) $ in a very approximating sense. It's a good thing to know, though, because then you have a better understanding of how polynomials work.
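
A two-line check that the series really does head to $ 6 $:

# Partial sums of k^2 / 2^k approach 6 quickly.
for N in [10, 20, 50]:
    print(N, sum(k * k / 2**k for k in range(1, N + 1)))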

--------------------

Practice Problem: Let $ P $ be a polynomial with integer coefficients. Using the method of finite differences, predict the degree of $ P $.

$ P(1) = 1 \mbox{ } P(2) = 9 \mbox{ } P(3) = 20 \mbox{ } P(4) = 36 \mbox{ } P(5) = 59 \mbox{ } P(6) = 91 $.

Wednesday, October 25, 2006

Serious Substitution. Topic: Calculus.

Problem: Use the substitution $ z = \sqrt{x} $ to solve the differential equation $ 4x \frac{d^2y}{dx^2}+2 \frac{dy}{dx}+y = 0 $.

Solution: Well, let's do what it says. From the chain rule, we have

$ \displaystyle \frac{dy}{dz} = \frac{dy}{dx} \cdot \frac{dx}{dz} = 2z \frac{dy}{dx} $.

Then we also have by the product rule and chain rule again

$ \displaystyle \frac{d^2y}{dz^2} = 2 \frac{dy}{dx}+2z \frac{d^2y}{dx^2} \cdot \frac{dx}{dz} = \frac{1}{z} \cdot \frac{dy}{dz}+4z^2 \frac{d^2y}{dx^2} $.

So we can make the substitutions

$ \displaystyle 4x \frac{d^2y}{dx^2} = 4z^2 \frac{d^2y}{dx^2} = \frac{d^2y}{dz^2}-\frac{1}{z} \cdot \frac{dy}{dz} $

and

$ \displaystyle \frac{dy}{dx} = \frac{1}{2z} \cdot \frac{dy}{dz} $

to obtain the differential equation

$ \frac{d^2y}{dz^2}-\frac{1}{z} \cdot \frac{dy}{dz} + 2 \cdot \frac{1}{2z} \cdot \frac{dy}{dz}+y = \frac{d^2y}{dz^2}+y = 0 $.

But we know the solution to this is $ y = C_1 \cos{z}+C_2 \sin{z} $ so our final solution is

$ y = C_1 \cos{\sqrt{x}}+C_2 \sin{\sqrt{x}} $.

QED.

--------------------

Comment: This substitution is, of course, not really natural but was actually found after solving the ODE in another way. Fortunately, it simplifies the problem rather greatly and seems to be a useful technique to look out for.
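
As a sanity check on the answer, here is a rough finite-difference test of one of the two basis solutions (only numerical evidence, not a derivation):

import math

# Numerically check that y = cos(sqrt(x)) satisfies 4x y'' + 2y' + y = 0.
def y(x):
    return math.cos(math.sqrt(x))

h = 1e-4
for x in [0.5, 1.0, 4.0, 10.0]:
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    print(x, 4 * x * ypp + 2 * yp + y(x))  # all approximately 0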

--------------------

Practice Problem: Find another way to solve the differential equation.

New Stuff!

My Tests:

2006-2007 Warm-Up 4

Tuesday, October 24, 2006

Everything You Ever Wanted To Know About Mods...

Tutorials:

Modular Arithmetic

As It Goes. Topic: Sets. Level: AIME.

Problem: (1992 Putnam - B1) Let $ S $ be a set of $ n $ distinct real numbers. Let $ A_s $ be the set of numbers that occur as averages of two distinct elements of $ S $. For a given $ n \ge 2 $, what is the smallest possible number of elements in $ A_s $?

Solution: We claim that $ |A_s| \ge 2n-3 $. To show this, let the elements of $ S $ be $ a_1 < a_2 < \cdots < a_n $. We can easily see that

$ \frac{a_1+a_2}{2} < \frac{a_1+a_3}{2} < \cdots < \frac{a_1+a_n}{2} $,

giving us $ n-1 $ distinct elements of $ A_s $. Furthermore,

$ \frac{a_1+a_n}{2} < \frac{a_2+a_n}{2} < \cdots < \frac{a_{n-1}+a_n}{2} $,

giving us an additional $ n-2 $ distinct elements of $ A_s $ for a total of $ 2n-3 $ elements. For any $ n \ge 2 $, the set $ S = \{1, 2, \ldots, n\} $ gives $ A_s = \{1.5, 2, 2.5, \ldots, n-0.5\} $, which has precisely $ 2n-3 $ elements. QED.

--------------------

Comment: A good strategy for solving minima/maxima problems (inequalities, too) is to find a case which you think may be a minimum/maximum and try to prove it. In this case, you can guess that the set $ \{1, 2, \ldots, n\} $ gives the minimum number of distinct average elements and try to find that many distinct elements in an arbitrary set, as shown above through ordering.
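
A brute-force illustration over small random sets (only evidence, not a proof):

import random
from itertools import combinations

# Distinct pairwise averages: random sets give at least 2n - 3 of them,
# and {1, 2, ..., n} gives exactly 2n - 3.
for n in range(2, 8):
    S = random.sample(range(1000), n)
    A = {(a + b) / 2 for a, b in combinations(S, 2)}
    tight = {(a + b) / 2 for a, b in combinations(range(1, n + 1), 2)}
    print(n, len(A) >= 2 * n - 3, len(tight) == 2 * n - 3)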

--------------------

Practice Problem: What if you consider the average of three elements?

Sunday, October 22, 2006

Integrrr... Topic: Algebra. Level: AIME.

Problem: (1992 Putnam - A1) Prove that $ f(n) = 1-n $ is the only integer-valued function defined on the integers that satisfies the following conditions.

(i) $ f(f(n)) = n $;
(ii) $ f(f(n+2)+2) = n $;
(iii) $ f(0) = 1 $.

Solution: We will use induction to prove the claim for $ -k \le n \le k+1 $ where $ k $ is a nonnegative integer.

Base Case: $ k = 0 \Rightarrow f(0) = 1, f(1) = 0 $ which are true from (iii) and (i).

Induction Step: Assume that $ f(n) = 1-n $ for $ -k \le n \le k+1 $.

Then $ f(k+2) = f(f(-k+1)+2) = -k-1 = 1-(k+2) $ by our induction hypothesis and (ii). Also, $ f(-(k+1)) = k+2 = 1-(-(k+1)) $ by (i), completing the induction.

Hence $ f(n) = 1-n $ for all integers. QED.

--------------------

Comment: Functional equations over the integers are usually not too hard to handle; induction, strong induction in particular, is a good way to attack them. Most of the time you just need to find out what set to induct on, in this case $ [-k,k+1] $.

--------------------

Practice Problem: Find all functions $ f: \mathbb{N} \rightarrow \mathbb{N} $ such that

(1) $ f(f(n)+n) = f(n) $;
(2) There exists an $ n_0 $ such that $ f(n_0) = 1 $.

Saturday, October 21, 2006

Practice Makes Perfect. Topic: All. Level: AIME/Olympiad.

Some problems that I have done and not felt like writing solutions to...

Practice Problem #1: (2004 Putnam - A2) For $ i = 1, 2 $ let $ T_i $ be a triangle with side lengths $ a_i, b_i, c_i $, and area $ A_i $. Suppose that $ a_1 \le a_2 $, $ b_1 \le b_2 $, $ c_1 \le c_2 $ and that $ T_2 $ is an acute triangle. Does it follow that $ A_1 \le A_2 $?

Practice Problem #2: (2001 Putnam - B3) For any positive integer $ n $, let $ \langle n \rangle $ denote the integer closest to $ \sqrt{n} $. Evaluate

$ \displaystyle \sum_{n=1}^{\infty} \frac{2^{\langle n \rangle}+2^{-\langle n \rangle}}{2^n} $.

Practice Problem #3: (2000 Putnam - B2) Prove that the expression

$ \frac{\gcd(m,n)}{n} \binom{n}{m} $

is an integer for all pairs of integers $ n \ge m \ge 1 $.

Wednesday, October 18, 2006

Dattebayo... Topic: Algebra/NT/Polynomials. Level: AIME/Olympiad.

Problem: (2004 Putnam - B1) Let $ P(x) = c_nx^n+c_{n-1}x^{n-1}+\cdots+c_0 $ be a polynomial with integer coefficients. Suppose that $ r $ is a rational number such that $ P(r) = 0 $. Show that the $ n $ numbers

$ c_nr, c_nr^2+c_{n-1}r, c_nr^3+c_{n-1}r^2+c_{n-2}r, \ldots, c_nr^n+c_{n-1}r^{n-1}+\cdots+c_1r $

are integers.

Solution: Well rational numbers only really have one good way of being handled - rewrite $ r $ as $ \frac{p}{q} $ for relatively prime integers $ p, q $. We then know that

$ P(r) = \frac{c_np^n+c_{n-1}p^{n-1}q+\cdots+c_0q^n}{q^n} = 0 $

so the numerator is zero. Also, we may write the $ n $ given numbers as

$ \frac{c_np^n+c_{n-1}p^{n-1}q+\cdots+c_{k+1}p^{k+1}q^{n-(k+1)}}{p^kq^{n-k}} $

for $ k = 0, 1, \ldots, n-1 $ (from substituting $ r = \frac{p}{q} $ and finding a common denominator). Notice that

$ c_np^n+c_{n-1}p^{n-1}q+\cdots+c_{k+1}p^{k+1}q^{n-(k+1)} = -(c_kp^kq^{n-k}+c_{k-1}p^{k-1}q^{n-(k-1)}+\cdots+c_0q^n) $

from $ P(r) = 0 $. The LHS gives us that the quantity is divisible by $ p^k $ (since every term has a power of $ p $ greater than or equal to $ k $) and the RHS gives us that the quantity is divisible by $ q^{n-k} $ (since every term has power of $ q $ greater than or equal to $ n-k $). Since $ p $ and $ q $ are relatively prime, however, we must have

$ (p^kq^{n-k}) | (c_np^n+c_{n-1}p^{n-1}q+\cdots+c_{k+1}p^{k+1}q^{n-(k+1)}) $

or

$ \frac{c_np^n+c_{n-1}p^{n-1}q+\cdots+c_{k+1}p^{k+1}q^{n-(k+1)}}{p^kq^{n-k}} $

is an integer for $ k = 0, 1, \ldots, n-1 $, as desired. QED.

--------------------

Comment: Actually more difficult than I expected from a B1 on the Putnam; it required solid knowledge of how to deal with rational numbers and roots of polynomials. That said, a good number of people got it right, so maybe I just messed around for a bit too long. In any case, this is a problem worth learning how to do.

--------------------

Practice Problem: Prove that the remainder of $ P(x) $ when divided by $ x-r $ is $ P(r) $.

Tuesday, October 17, 2006

Fraction Faction. Topic: Inequalities/NT. Level: AIME/Olympiad.

Problem: (2005 Putnam - B2) Find all positive integers $ n, k_1, \ldots, k_n $ such that $ k_1+\cdots+k_n = 5n-4 $ and

$ \frac{1}{k_1}+\cdots+\frac{1}{k_n} = 1 $.

Solution: First, by the Cauchy-Schwarz Inequality, we find that

$ \displaystyle \left(\sum k_i \right)\left(\sum \frac{1}{k_i}\right) \ge n^2 \Rightarrow \sum \frac{1}{k_i} \ge \frac{n^2}{5n-4} $.

For $ n > 4 $, however, $ n^2 > 5n-4 \Rightarrow \sum \frac{1}{k_i} > 1 $, which is a contradiction. Hence $ n \le 4 $. Then we proceed by cases:

CASE 1: $ n = 1 $

Clearly $ k_1 = 1 $ is the only solution.

CASE 2: $ n = 2 $

We have $ k_1+k_2 = 6 $ so our possible pairs are $ (1,5); (2,4); (3,3) $ (and permutations), none of which work.

CASE 3: $ n = 3 $

We have $ k_1+k_2+k_3 = 11 $. Notice that $ k_i \neq 1 $ or the sum of the reciprocals will exceed $ 1 $. So our possible triplets are $ (2, 2, 7); (2, 3, 6); (2, 4, 5); (3, 3, 5); (3, 4, 4) $ (and permutations), of which only $ (2, 3, 6) $ works.

CASE 4: $ n = 4 $

Notice that with our use of Cauchy-Schwarz above at $ n = 4 $ we have $ \sum \frac{1}{k_i} \ge \frac{n^2}{5n-4} = \frac{16}{16} = 1 $, so equality must hold. Hence $ k_1 = k_2 = k_3 = k_4 = 4 $ is the only solution.

In summary, our solutions are

$ n = 1 $: $ \{1\} $
$ n = 3 $: $ \{2,3,6\} $
$ n = 4 $: $ \{4, 4, 4, 4\} $.

QED.

--------------------

Comment: The only really interesting part of this problem was bounding with Cauchy. After that, it's just manual counting to figure out the solutions. But overall this is a good test of using inequalities to simplify number theory problems.
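
The manual counting can also be delegated to a short exhaustive search over the small cases that Cauchy-Schwarz leaves (standard library only):

from fractions import Fraction
from itertools import combinations_with_replacement

# Nondecreasing tuples (k_1, ..., k_n) of positive integers with sum 5n - 4
# and reciprocals summing to exactly 1, for n = 1, 2, 3, 4.
for n in range(1, 5):
    target = 5 * n - 4
    for ks in combinations_with_replacement(range(1, target + 1), n):
        if sum(ks) == target and sum(Fraction(1, k) for k in ks) == 1:
            print(n, ks)  # prints (1,), (2, 3, 6), (4, 4, 4, 4)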

--------------------

Practice Problem: What if $ k_1+\cdots+k_n = 4n-3 $?

Sunday, October 15, 2006

Great Expectations. Topic: Probability. Level: AMC/AIME.

Definition: The expected value of a random variable $ X $ is $ \displaystyle \sum_x x \cdot p(x) $, where the summation is taken over all possible values $ x $ of $ X $ and $ p(x) $ is the probability that $ X = x $. For example, the expected value of a die roll is

$ 1 \cdot \frac{1}{6}+ 2\cdot \frac{1}{6}+ \cdots + 6 \cdot \frac{1}{6} = \frac{7}{2} $.

--------------------

Problem: What is the expected number of rolls on a standard die that you need to make before every number shows up at least once?

Solution: Let $ E_n $ be the expected number of rolls until $ n $ distinct numbers have shown up at least once. Clearly, $ E_1 = 1 $. Then

$ E_2 = \frac{5}{6}(E_1+1)+\frac{1}{6}(E_2+1) $

because there is a $ \frac{5}{6} $ probability that it will take exactly one more than $ E_1 $ (if we roll a new number) and a $ \frac{1}{6} $ probability that it will result in the same number as before and we start from the same situation again. Using this same argument, we deduce that

$ E_{n+1} = \frac{6-n}{6}(E_n+1)+\frac{n}{6}(E_{n+1}+1) $,

which is equivalent to

$ E_{n+1} = E_n+\frac{6}{6-n} $.

Since we want to find $ E_6 $, we just apply this recursion $ 6 $ times to get

$ E_6 = 6\left(1+\frac{1}{2}+\cdots+\frac{1}{6}\right) = 14.7 $.

QED.

--------------------

Comment: Expected value is an interesting topic and can result in a bunch of strange recursions, but it's quite cool in that respect. It's "possible" to solve this problem using only probability, but that would get very ugly very quickly.
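
A quick simulation (nothing fancy) lands right on the exact answer:

import random

# Roll a fair die until all six faces have appeared; average over many trials.
def rolls_until_all_faces(sides=6):
    seen, count = set(), 0
    while len(seen) < sides:
        seen.add(random.randint(1, sides))
        count += 1
    return count

trials = 100000
print(sum(rolls_until_all_faces() for _ in range(trials)) / trials)  # about 14.7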

--------------------

Practice Problem: Generalize this to a die with $ 8 $, $ 12 $, or $ 20 $ sides (the three larger regular dice that exist).

Saturday, October 14, 2006

Sumthing's Up. Topic: Algebra/S&S. Level: AIME.

Problem: (1994 Putnam - A1) Suppose that a sequence $ a_1, a_2, a_3, \ldots $ satisfies $ 0 < a_n \le a_{2n}+a_{2n+1} $ for all $ n \ge 1 $. Prove that the series $ \displaystyle \sum_{n=1}^{\infty} a_n $ diverges.

Solution: Well, let's see. $ a_1 $ is an arbitrary constant. Then

$ a_2+a_3 \ge a_1 $

$ (a_4+a_5)+(a_6+a_7) \ge a_2+a_3 \ge a_1 $.

Whoa, that looks like a pattern... Indeed, by induction we can show that

$ S_k = (a_{2^k}+a_{2^k+1})+(a_{2^k+2}+a_{2^k+3})+\cdots+(a_{2^{k+1}-2}+a_{2^{k+1}-1}) \ge S_{k-1} $

for $ k \ge 1 $ and $ S_0 = a_1 $, so

$ \displaystyle \sum_{n=1}^{\infty} a_n = \sum_{k=0}^{\infty} S_k \ge \sum_{k=0}^{\infty} a_1 $,

which clearly diverges. QED.

--------------------

Comment: Pretty interesting and easy problem, though from the scores evidently a lot of people missed some minor error in rigor (59 people got 9/10, and also 59 people got a full score). In any case, this is a good concept for infinite series, and should be familiar to most people who do math a lot (a.k.a. if you are familiar with the following practice problem).

--------------------

Practice Problem: Without calculus, show that $ \displaystyle \sum_{n=1}^{\infty} \frac{1}{n} $ diverges.

Prime Factorizations... With A Twist!

Tutorials:

Prime Factor Bar Graphs

Thursday, October 12, 2006

Divvy It Up. Topic: Number Theory. Level: AMC.

Problem: Find the number of divisors of a positive integer.

Solution: For all you people at math club who didn't understand it... here it is. Suppose our number is $ n = p_1^{e_1} \cdot p_2^{e_2} \cdots p_k^{e_k} $, where the $ p $'s are the prime divisors of $ n $ and the $ e $'s are the number of times (power) each $ p $ divides $ n $. For example, $ 36 = 2^2 \cdot 3^2 $.

We will show that the number of divisors is $ (e_1+1)(e_2+1) \cdots (e_k+1) $. Let $ d $ be a divisor of $ n $. Write $ d $ as

$ d = p_1^{a_1} \cdot p_2^{a_2} \cdots p_k^{a_k} $.

We know that $ d $ can only have the prime divisors of $ n $ or it obviously won't divide $ n $. So now it remains to choose the $ a $'s. Well, what can $ a_1 $ be? It can be $ 0, 1, 2, \ldots, e_1 $ but if $ a_1 > e_1 $ then $ d $ will have more $ p_1 $'s than $ n $ and won't divide it. So there are $ e_1+1 $ choices there. Similarly, $ a_2 $ can be $ 0, 1, 2, \ldots, e_2 $ and $ a_j $ can be $ 0, 1, 2, \ldots, e_j $ for $ j = 3, 4, \ldots, k $.

So if we just multiply the number of ways we can pick the power on each prime factor, we get the total number of divisors, i.e. $ (e_1+1)(e_2+1) \cdots (e_k+1) $. QED.

--------------------

Comment: This is very important for general number theory; it is one of the fundamental ideas. Plus, it's logical and easy to remember so it should always be accessible during competitions.
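
Here is a small script comparing the formula with a direct count; the trial-division factorization is just one simple way to get the exponents.

# Compare (e_1 + 1)(e_2 + 1)...(e_k + 1) with a brute-force divisor count.
def divisor_count_formula(n):
    count, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        count *= e + 1
        d += 1
    if n > 1:            # a leftover prime factor
        count *= 2
    return count

def divisor_count_brute(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for n in [36, 97, 100, 5040]:
    print(n, divisor_count_formula(n), divisor_count_brute(n))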

--------------------

Practice Problem: (2002-2003 MIC Championships Team Test - #2) How many natural numbers less than $ 50,000 $ have exactly $ 35 $ positive integer factors?

Tuesday, October 10, 2006

We're Closed. Topic: Algebra/Sets. Level: AIME.

Problem: (1995 Putnam - A1) Let $ S $ be a set of real numbers which is closed under multiplication (that is, if $ a $ and $ b $ are in $ S $, then so is $ ab $). Let $ T $ and $ U $ be disjoint subsets of $ S $ whose union is $ S $. Given that the product of any three (not necessarily distinct) elements of $ T $ is in $ T $ and that the product of any three elements of $ U $ is in $ U $, show that at least one of the two subsets $ T, U $ is closed under multiplication.

Solution: Let's attack this problem using contradiction. Suppose that both $ T $ and $ U $ are NOT closed under multiplication. What happens? Well, we then know that there are $ a, b \in T $ such that $ ab \in U $ (since $ ab \in S $ and $ ab \not\in T $) and $ c, d \in U $ such that $ cd \in T $. However, we then have

$ abcd = a \cdot b \cdot (cd) \in T $ (three elements in $ T $)

$ abcd = (ab) \cdot c \cdot d \in U $ (three elements in $ U $),

which is a contradiction since $ T $ and $ U $ are disjoint. Hence at least one of them must be closed under multiplication. QED.

--------------------

Comment: Ha, that wasn't so bad, was it? Well, it took a bit to come up with, but after realizing contradiction was an easy way to do it and listing out what you knew, putting everything together was pretty quick. Working with sets is a good way to develop your thinking skills since it's quite theoretical.

--------------------

Practice Problem: Which of the standard operations (addition, subtraction, multiplication, division) are the following sets closed under: (a) the reals, (b) the rationals, (c) the integers, (d) the odd integers, (e) the natural numbers, and (f) powers of $ 2 $. Exclude zero from all of these sets.

Sunday, October 8, 2006

New Stuff!

Tutorials:

Triangle Ratios (Advanced)

Think Rationally. Topic: Algebra/NT/Polynomials. Level: AIME/Olympiad.

Problem: (2003 Putnam - B4) Let

$ f(z) = az^4+bz^3+cz^2+dz+e = a(z-r_1)(z-r_2)(z-r_3)(z-r_4) $

where $ a, b, c, d, e $ are integers, $ a \neq 0 $. Show that if $ r_1+r_2 $ is a rational number and $ r_1+r_2 \neq r_3+r_4 $, then $ r_1r_2 $ is a rational number.

Solution: Some important results that you don't need to prove - if $ x $ and $ y $ are rational, then

$ x+y $, $ x-y $, $ xy $, and $ \frac{x}{y} $

are rational (note the last one is only true for $ y \neq 0 $).

We start off by using Vieta's Formulas, which tell us that

$ S_1 = r_1+r_2+r_3+r_4 $

$ S_2 = r_1r_2+r_1r_3+r_1r_4+r_2r_3+r_2r_4+r_3r_4 = (r_1+r_2)(r_3+r_4)+r_1r_2 $

$ S_3 = r_1r_2r_3+r_1r_2r_4+r_1r_3r_4+r_2r_3r_4 = r_1r_2(r_3+r_4)+r_3r_4(r_1+r_2) $

are all rational. But since we know $ r_1+r_2 $ is rational, we must have $ r_3+r_4 = S_1-(r_1+r_2) $ be rational as well. Then

$ (r_1+r_2)(r_3+r_4) $

and

$ r_1r_2+r_3r_4 = S_2-(r_1+r_2)(r_3+r_4) $

are rational. Let $ k = r_3+r_4-(r_1+r_2) \neq 0 $, which we know is rational. Then

$ S_3 = r_1r_2(r_1+r_2+k)+r_3r_4(r_1+r_2) = (r_1r_2+r_3r_4)(r_1+r_2)+r_1r_2k $,

but $ (r_1r_2+r_3r_4)(r_1+r_2) $ is rational, so $ r_1r_2k $ must be rational as well. Finally, since $ k $ is rational and nonzero, we have $ r_1r_2 $ rational. QED.

--------------------

Comment: This problem required slugging through some rational/irrational arguments, but turned out to be not so bad. As long as you recognized the use of Vieta's Formulas, you should have been able to work out the details to get to the solution.

--------------------

Practice Problem: Find a polynomial $ p(z) $ with $ r_1+r_2 = r_3+r_4 $ such that the above argument does not hold.

Thursday, October 5, 2006

Binary Soup. Topic: Number Theory. Level: AIME/Olympiad.

Problem: (1998 Putnam - A4) Let $ A_1 = 0 $ and $ A_2 = 1 $. For $ n > 2 $, the number $ A_n $ is defined by concatenating the decimal expansions of $ A_{n-1} $ and $ A_{n-2} $ from left to right. For example $ A_3 = A_2A_1 = 10 $, $ A_4 = A_3A_2 = 101 $, $ A_5 = A_4A_3 = 10110 $, and so forth. Determine all $ n $ such that $ 11 $ divides $ A_n $.

Solution: A recursively defined sequence divisibility problem. Not too uncommon; we use our usual strategy and work $ \pmod{11} $, finding a recursive relation for the residues. First define $ B_n $ to be the number of digits in $ A_n $; notice, however, that $ B_n = B_{n-1}+B_{n-2} $ and $ B_1 = B_2 = 1 $, meaning that $ B_n $ is simply the Fibonacci sequence.

We can see that $ A_n $ is just $ A_{n-1} $ followed by $ B_{n-2} $ zeros and then added to $ A_{n-2} $. Translated into math, this is

$ A_n = 10^{B_{n-2}} A_{n-1}+A_{n-2} $.

But we only care about this $ \pmod{11} $, so

$ A_n \equiv (-1)^{B_{n-2}} A_{n-1}+A_{n-2} \pmod{11} $.

Well let's start writing out terms for $ B_n \pmod{2} $ (since that's all we care about) and $ A_n \pmod{11} $.

$ B_n: 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0 \ldots $

$ A_n: 0, 1, 10, 2, 1, 1, 0, 1, 10, 2, 1, 1 \ldots $

We see that both sequences repeat with period six, and since each term depends only on the two terms before it, they will repeat forever. Hence our answers are all positive integers $ n \equiv 1 \pmod{6} $. QED.

--------------------

Comment: This problem really wasn't all that hard for an A4; it only required a little sequence construction to find out what the answer was and a rigorous proof even wouldn't be too much of a challenge. It's a good exercise in converting a word problem to something you can work with mathematically and applying mods effectively.
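
The sequence is easy to reproduce directly, which makes for a nice check of the period-six claim (standard library only):

# Build A_n by concatenation and record which terms are divisible by 11.
A = ["0", "1"]                 # A_1, A_2
for n in range(2, 20):
    A.append(A[-1] + A[-2])    # A_n = A_{n-1} followed by A_{n-2}
print([i + 1 for i, s in enumerate(A) if int(s) % 11 == 0])  # [1, 7, 13, 19]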

--------------------

Practice Problem: The Fibonacci sequence is defined by $ F_0 = F_1 = 1 $ and $ F_n = F_{n-1}+F_{n-2} $ for $ n > 1 $. Prove that if $ a|b $ then $ F_a|F_b $.

New Stuff!

Practice Tests:

2003 MIC Ch. Individual
2003 MIC Ch. Individual MC

Tutorials:

Logarithms

Wednesday, October 4, 2006

So Complex, Yet So Plain. Topic: Complex Numbers/Polynomials. Level: AMC/AIME.

Problem: (2006-2007 Warm-Up 3 - #13) What is the area of the shape formed by connecting the solutions to $ x^3-5x^2+11x-7 = 0 $ in the complex plane?

Solution: Well, let's start off by finding those solutions. It's not hard to guess that $ x= 1 $ is a solution, so we have

$ (x-1)(x^2-4x+7) = 0 $.

Using the quadratic formula, we get the other two solutions to be

$ x = 2 \pm i \sqrt{3} $.

So our points are $ 1, 2+i\sqrt{3}, 2-i\sqrt{3} $. This makes an isosceles triangle with base $ 2 \sqrt{3} $ and height $ 1 $ in the complex plane, which has area $ \sqrt{3} $. QED.

--------------------

Comment: Pretty easy after we guessed that $ x = 1 $ was a solution; a general formula for the area probably gets ugly when the polynomial is irreducible. But in this case, we can draw the triangle and figure out the area without too many tricks.
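
A numeric cross-check of the roots and the area (assuming numpy is available; the shoelace formula handles the area):

import numpy as np

# Roots of x^3 - 5x^2 + 11x - 7 and the area of the triangle they form.
z1, z2, z3 = np.roots([1, -5, 11, -7])
area = abs((z1.real * (z2.imag - z3.imag)
            + z2.real * (z3.imag - z1.imag)
            + z3.real * (z1.imag - z2.imag)) / 2)
print(area, np.sqrt(3))  # both about 1.732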

--------------------

Practice Problem: What is the area of the shape formed by connecting the solutions to $ x^3 = 1 $ in the complex plane?

New Stuff!

My Tests:

2006-2007 Warm-Up 3

Practice Tests:

2002 MIC Ch. Individual
2002 MIC Ch. Individual MC

Tutorials:

Combinatorics

Tuesday, October 3, 2006

Conehead. Topic: Geometry. Level: AMC.

Problem: (1998 Putnam - A1) A right circular cone has base of radius $1$ and height $3$. A cube is inscribed in the cone so that one face of the cube is contained in the base of the cone. What is the side length of the cube?

Solution: Suppose the cube has side length $ s $. Then the top face of the cube has to fit into a circle of radius

$ 1-\frac{s}{3} $

by proportions ($ 3-s $ of the total height of $ 3 $ will still be above the cube). So its side length can be at most

$ \left(1-\frac{s}{3}\right)\sqrt{2} $

since a square with $ \sqrt{2} $ times the radius of a circle is perfectly inscribed. Then we have

$ \left(1-\frac{s}{3}\right)\sqrt{2} \ge s \Rightarrow s \le \frac{9\sqrt{2}-6}{7} $,

so the side length must be the maximum $ \frac{9\sqrt{2}-6}{7} $ in order to be inscribed. QED.

--------------------

Comment: This problem was not difficult at all. Just drawing a picture and using some ratios it's easy to find the inequality and then solve for $ s $ knowing that equality must hold for inscribed things.

--------------------

Practice Problem: Can you do this for a general cone with base radius $ a $ and height $ b $?

Saturday, September 30, 2006

Eww... Topic: Algebra/Inequalities. Level: AIME/Olympiad.

Problem: Find the minimum value of

$ \frac{\left(x+\frac{1}{x}\right)^6-\left(x^6+\frac{1}{x^6}\right)-2}{ \left(x+\frac{1}{x}\right)^3-\left(x^3+\frac{1}{x^3}\right)} $

for $ x > 0 $.

Solution: Make a handy substitution $ a = x $, $ b = \frac{1}{x} $, so that $ ab = 1 $ and the $ 2 $ in the numerator can be written as $ 2a^3b^3 $. Then we want to minimize

$ \frac{(a+b)^6-(a^6+b^6)-2a^3b^3}{(a+b)^3-(a^3+b^3)} = \frac{2a^4+5a^3b+6a^2b^2+5ab^3+2b^4}{a+b} $

after cancelling $ 3ab $. But we see that this evenly divides and becomes

$ 2a^3+3a^2b+3ab^2+2b^3 = (a+b)^3+(a^3+b^3) $.

But $ a+b \ge 2 $ and $ a^3+b^3 \ge 2 $ by AM-GM and the same equality case, so

$ (a+b)^3+a^3+b^3 \ge 10 $

with equality at $ a = b = 1 $. QED.

--------------------

Comment: It was very useful to make the substitution and then homogenize the numerator so you could cancel things out evenly. Then classical inequalities (AM-GM) helped us finish it.

--------------------

Practice Problem: Prove that $ (a^3+b^3+c^3-3abc)(a+b+c) \ge 0 $.

Wednesday, September 27, 2006

This Is Mental Math?!? Topic: Number Theory. Level: AIME.

Problem: (2006-2007 Warm-Up 2) Let $ f(x) $ be the sum of the digits of $ x $. Find $ f(f(f(8765^{4321}))) $.

Solution: Let's try bounding our quantity. We know

$ f(x) \le 9 \cdot \lceil \log{x} \rceil $

because $ \lceil \log{x} \rceil $ is the number of digits of $ x $ (except when $ x $ is a power of ten, but those are trivial) and the biggest digit is $ 9 $. So we can say that

$ f(8765^{4321}) \le 9 \cdot \lceil \log{8765^{4321}} \rceil < 9 \cdot \log{10000^{10000}} = 360000 $

so

$ f(f(8765^{4321})) \le 9 \cdot \lceil \log{360000} \rceil = 54 $

and

$ f(f(f(8765^{4321}))) \le 13 $

because the largest sum of digits of any number at most $ 54 $ is $ 4+9 = 13 $. So we know our value is at most $ 13 $. But remember that the sum of the digits of a number is equal to itself $ \pmod{9} $ (prove this!) and $ 8765^{4321} \equiv (-1)^{4321} \equiv 8 \pmod{9} $ so our answer is also $ 8 \pmod{9} $. But the only number at most $ 13 $ that works is then $ 8 $. QED.

--------------------

Comment: Admittedly, this is not likely to show up on a mental math test, but it's feasible to do in your head. The bounding is not horrific and taking the mod is also not bad. Of course, $ 45 $ seconds is pretty quick, but if you knew how to approach the problem already, it was definitely possible (and done).
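
Of course, Python's big integers make the claim trivial to verify exactly (no mental math required):

# Compute f(f(f(8765^4321))) directly with exact integer arithmetic.
def f(x):
    return sum(int(d) for d in str(x))

print(f(f(f(8765**4321))))  # 8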

--------------------

Practice Problem: Show that the sum of the digits of a number is congruent to itself modulo nine.