Problem: (Problem-Solving Through Problems - 6.5.12) Suppose $ n $ is a nonnegative integer and
$ f_n(x) = c_0e^{r_0x}+c_1e^{r_1x}+\cdots+c_ne^{r_nx} $,
where $ c_i $ and $ r_i $ are real numbers. Prove that if $ f_n $ has more than $ n $ zeros in $ \mathbb{R} $, then $ f_n(x) \equiv 0 $.
Solution: First, we add the claim that $ f_n(x) = 1 $ can occur for at most $ n+1 $ distinct $ x $ unless $ f_n $ is identically $ 1 $.
We proceed by induction. For $ n = 0 $, we have $ f_0(x) = c_0e^{r_0x} $, which is never zero unless $ c_0 = 0 $, so if it has a root it must be identically $ 0 $. It also takes the value $ 1 $ at most once: if $ r_0 \neq 0 $ it is monotonic, and if $ r_0 = 0 $ it is constant, hence either never $ 1 $ or identically $ 1 $. Now suppose both hypotheses are true for some nonnegative integer $ k $. We want to show that
$ f_{k+1}(x) = c_0e^{r_0x}+c_1e^{r_1x}+\cdots+c_ke^{r_kx}+c_{k+1}e^{r_{k+1}x} $
has at most $ k+1 $ zeros (except when $ c_i = 0 $ for all $ i $) and takes the value $ 1 $ at most $ k+2 $ times (unless it is identically $ 1 $). If $ c_{k+1} = 0 $, we apply the inductive hypothesis and it is trivially true, so assume $ c_{k+1} \neq 0 $. From $ f_{k+1}(x) = 0 $, we get
$ c_0e^{r_0x}+c_1e^{r_1x}+\cdots+c_ke^{r_kx} = -c_{k+1}e^{r_{k+1}x} $.
Dividing both sides by $ -c_{k+1}e^{r_{k+1}x} $ and letting $ b_i = -\frac{c_i}{c_{k+1}} $ and $ q_i = r_i-r_{k+1} $ for $ i = 0, 1, \ldots, k $, we obtain
$ b_0e^{q_0x}+b_1e^{q_1x}+\cdots+b_ke^{q_kx} = 1 $.
By our inductive hypothesis, this occurs for at most $ k+1 $ distinct $ x $ unless the left-hand side is identically $ 1 $, in which case $ f_{k+1} \equiv 0 $. Hence $ f_{k+1} $ has at most $ k+1 $ zeros or is identically $ 0 $.
Now we want to show that $ f_{k+1}(x) = 1 $ for at most $ k+2 $ distinct $ x $ unless $ f_{k+1} \equiv 1 $. Suppose $ f_{k+1}(x) = 1 $ at points $ x_1 < x_2 < \cdots < x_m $ with $ m > k+2 $. Then, by Rolle's Theorem, we have
$ f_{k+1}^{\prime}(x) = c_0r_0e^{r_0x}+c_1r_1e^{r_1x}+\cdots+c_kr_ke^{r_kx}+c_{k+1}r_{k+1}e^{r_{k+1}x} = 0 $
at some point between each consecutive pair of $ x_i $'s, giving $ m-1 > k+1 $ distinct zeros. But $ f_{k+1}^{\prime} $ is in the same family of functions, so by what we just proved it either has at most $ k+1 $ zeros or is identically $ 0 $. The first case contradicts $ m-1 > k+1 $. In the second case $ f_{k+1} $ is constant, so if it equals $ 1 $ anywhere it is identically $ 1 $. Hence $ f_{k+1}(x) = 1 $ for at most $ k+2 $ distinct $ x $ unless $ f_{k+1} \equiv 1 $, completing the induction.
Therefore, we conclude that $ f_n(x) $ has at most $ n $ zeros or is identically $ 0 $.
--------------------
Comment: A pretty tough problem, took a couple of looks at smaller cases to determine the method with which to solve it. The key is looking for the inductive step, somehow reducing the $ k+1 $ case to the $ k $ case. Adding in the extra condition might not have been necessary but it made it easier to move from case to case. Here's a "flow chart" of what proves what.
$ f_k(x) = 0 $ for at most $ k $ values or is identically $ 0 $ --> $ f_k(x) = 1 $ for at most $ k+1 $ values --> $ f_{k+1}(x) = 0 $ for at most $ k+1 $ values or is identically $ 0 $ --> $ f_{k+1}(x) = 1 $ for at most $ k+2 $ values.
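As a sanity check on the bound (a quick Python sketch, not part of the proof), we can sample random exponential sums with $ n+1 $ terms on a grid and count sign changes; each sign change witnesses a real zero, and the count never exceeds $ n $.

import math
import random

def sign_changes(c, r, lo=-10.0, hi=10.0, steps=20000):
    # f(x) = sum c_i e^{r_i x}; each sign change on the grid witnesses a real zero,
    # so this is a lower bound on the number of zeros of f
    prev, changes = 0, 0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        s = sum(ci * math.exp(ri * x) for ci, ri in zip(c, r))
        cur = (s > 0) - (s < 0)
        if cur != 0:
            if prev != 0 and cur != prev:
                changes += 1
            prev = cur
    return changes

random.seed(1)
for _ in range(200):
    n = random.randint(1, 4)
    c = [random.uniform(-3, 3) for _ in range(n + 1)]
    r = [random.uniform(-1.5, 1.5) for _ in range(n + 1)]
    assert sign_changes(c, r) <= n      # the theorem: at most n zeros unless identically 0
print("no exponential sum with n+1 terms showed more than n sign changes")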
--------------------
Practice Problem: Show that $ x^3-3x+b $ cannot have more than one zero in $ [-1,1] $, regardless of the value of $ b $.
Thursday, November 30, 2006
Wednesday, November 29, 2006
Balance Those p's. Topic: Algebra/Polynomials/S&S. Level: Olympiad.
Problem: (Stanford Putnam Practice) A finite sequence $ a_1, a_2, \ldots, a_n $ is called $ p $-balanced if any sum of the form $ a_k+a_{k+p}+a_{k+2p}+\cdots $ is the same for $ k = 1, 2, \ldots, p $. Prove that if a sequence with $ 50 $ members is $ p $-balanced for $ p = 3, 5, 7, 11, 13, 17 $, then all its members are equal to zero.
Solution: Let $ P(x) = a_{50}x^{49}+a_{49}x^{48}+\cdots+a_2x+a_1 $. Recall the strategy of isolating certain terms in a polynomial. Writing $ \omega_p $ for a primitive $ p $th root of unity, we have
$ \displaystyle a_1+a_{1+p}+a_{1+2p}+\cdots = \frac{1}{p} \sum_{i=0}^{p-1} P(\omega_p^i) $
because all of the terms will cancel out by the identity $ 1+\omega_p+\omega_p^2+\cdots+\omega_p^{p-1} = 0 $ except for the terms whose exponent is divisible by $ p $, in which case $ \omega_p^p = 1 $. Similarly, using the polynomials $ xP(x), x^2P(x), \ldots, x^{p-1}P(x) $ we obtain
$ \displaystyle a_k+a_{k+p}+a_{k+2p}+\cdots = \frac{1}{p} \sum_{i=0}^{p-1} \omega_p^{i(p+1-k)}P(\omega_p^i) $
from $ x^{p+1-k}P(x) $ and $ k = 2, 3, \ldots, p $. Hence we have a system of $ p $ equations in the variables
$ P(1), P(\omega_p), \ldots, P(\omega_p^{p-1}) $.
But if we have a system of $ p $ equations for each of $ p = 3, 5, 7, 11, 13, 17 $, we have a total of $ 3+5+7+11+13+17 = 56 $ equations. Since the only argument shared between the primes was $ 1 $, and it showed up $ 6 $ times (once for each prime), we obtain information about $ P $ at $ 56-5 = 51 $ distinct points. However, since a polynomial of degree $ 49 $ is determined by $ 50 $ points, this forces $ P $ to be the zero polynomial, which means $ a_1 = a_2 = \cdots = a_{50} = 0 $, as desired. QED.
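The root-of-unity filter is easy to play with numerically. Here is a small Python sketch (with made-up coefficients, purely for illustration) that recovers $ a_k+a_{k+p}+a_{k+2p}+\cdots $ from the values of $ P $ at the $ p $th roots of unity and checks it against summing the coefficients directly.

import cmath
import random

random.seed(0)
a = [random.randint(-5, 5) for _ in range(50)]            # a[j] plays the role of a_{j+1}

def P(x):
    # P(x) = a_1 + a_2 x + ... + a_50 x^49
    return sum(a[j] * x ** j for j in range(50))

p = 7
w = cmath.exp(2j * cmath.pi / p)                           # primitive p-th root of unity

for k in range(1, p + 1):
    # filter: a_k + a_{k+p} + a_{k+2p} + ... = (1/p) sum_i w^{i(p+1-k)} P(w^i)
    filtered = sum(w ** (i * (p + 1 - k)) * P(w ** i) for i in range(p)) / p
    direct = sum(a[j - 1] for j in range(k, 51, p))
    assert abs(filtered - direct) < 1e-6
print("root-of-unity filter matches the direct sums")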
--------------------
Comment: This is an example of generating functions showing up when you least expect it, well not exactly, but it was cool anyway. The proof above is not completely rigorous because we would need to show that a polynomial of degree $ 49 $ actually cannot pass through all $ 51 $ of the points, but it's almost there.
--------------------
Practice Problem: Let $ p(n, k) $ denote the number of ways you can partition $ n $ into $ k $ parts. Show that
$ \displaystyle \sum_{k=1}^n p(n, k) = p(2n, n) $.
Also show that the number of ways $ n $ can be partitioned into unequal parts is equal to the number of ways $ n $ can be partitioned into odd parts.
Tuesday, November 28, 2006
Divides Both! Topic: NT/S&S. Level: AIME.
Problem: (Stanford Putnam Practice) Let $ a_0 = 1 $ and $ a_{n+1} = a_n^2+1 $ for $ n \ge 0 $. Find $ \gcd(a_{2004},a_{999}) $.
Solution: Re-label starting with $ a_0 = 0 $, $ a_1 = 1 $ instead. We will prove a much stronger statement: $ \gcd(a_m, a_k) = a_{\gcd(m,k)} $ for $ m, k \neq 0 $.
First, we will show a lemma that if $ q | a_r $ for positive integers $ r, q $ then $ a_{r+s} \equiv a_s \pmod{q} $ for $ s \ge 0 $. This is trivial by induction, since $ a_r \equiv a_0 \pmod{q} $, $ a_{r+s+1} \equiv a_{r+s}^2+1 \pmod{q} $, and $ a_{s+1} \equiv a_s^2+1 \pmod{q} $.
We will prove the final result by induction as well, on $ \max(m,k) $. When $ \max(m,k) = 1 $, we clearly have $ \gcd(a_1, a_1) = 1 = a_{\gcd(1,1)} $.
Assume the property is true for $ \max(m,k) \le p $. Now suppose $ \beta $ divides $ \gcd(a_m,a_{p+1}) $ with $ m < p+1 $. We must have $ a_m \equiv a_{p+1} \equiv 0 \pmod{\beta} $. But then by the lemma above we have $ \beta | a_m \Rightarrow a_{p+1} \equiv a_{p+1-m} \equiv 0 \pmod{\beta} $, and conversely any common divisor of $ a_m $ and $ a_{p+1-m} $ divides $ a_{p+1} $ by the same lemma. So
$ \gcd(a_m,a_{p+1}) = \gcd(a_m,a_{p+1-m}) $
with $ p+1-m \le p $. But by our inductive hypothesis, we know that
$ \gcd(a_m,a_{p+1-m}) = a_{\gcd(m,p+1-m)} = a_{\gcd(m,p+1)} $, so $ \gcd(a_m,a_{p+1}) = a_{\gcd(m,p+1)} $, as desired. Shifting the indices in the problem statement, we have
$ \gcd(a_{2005}, a_{1000}) = a_{\gcd(2005,1000)} = a_5 = 677 $. QED.
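A quick brute-force check of the stronger claim (a Python sketch, using the relabeled sequence):

from math import gcd

# relabeled sequence: a_0 = 0, a_1 = 1, a_{n+1} = a_n^2 + 1
N = 16
a = [0, 1]
for n in range(1, N):
    a.append(a[n] ** 2 + 1)

# check gcd(a_m, a_k) = a_{gcd(m, k)} for all small indices
for m in range(1, N + 1):
    for k in range(1, N + 1):
        assert gcd(a[m], a[k]) == a[gcd(m, k)]

print(a[5])   # 677 = gcd(a_2005, a_1000), the answer to the original problem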
--------------------
Comment: Pretty classic as far as GCDs of terms in a sequence go. It follows a property similar to one the Fibonacci sequence also has. Instead of using induction on the last step, another option was to invoke Euclid's Algorithm to arrive at $ \gcd(m,p+1) $, but that didn't seem quite as fulfilling.
--------------------
Practice Problem: (Stanford Putnam Practice) Define a sequence $ \{a_i\} $ by $ a_1 = 3 $ and $ a_{i+1} = 3^{a_i} $ for $ i \ge 1 $. Which integers between $ 00 $ and $ 99 $ occur as the last two digits of infinitely many $ a_i $?
Monday, November 27, 2006
To Converge Or Not To Converge. Topic: Real Analysis/S&S.
Problem: (Stanford Math 51H, Cauchy Root Test) Suppose there exists a $ \lambda \in (0,1) $ and an integer $ N \ge 1 $ such that $ |a_n|^{\frac{1}{n}} \le \lambda $ for all $ n \ge N $. Prove that $ \displaystyle \sum_{n=1}^{\infty} a_n $ converges.
Solution: Well let's just discard $ \displaystyle \sum_{n=1}^{N-1} a_n $ because it is finite and obviously converges. It remains to show that $ \displaystyle \sum_{n=N}^{\infty} a_n $ converges.
But then we have $ |a_n|^{\frac{1}{n}} \le \lambda \Rightarrow |a_n| \le \lambda^n $. So
$ \displaystyle \sum_{n=N}^{\infty} |a_n| \le \sum_{n=N}^{\infty} \lambda^n $.
The last summation, however, is a geometric series with common ratio $ 0 < \lambda < 1 $, so it converges. Hence our sum converges absolutely, and therefore converges as well. QED.
--------------------
Comment: The root test is, in effect, a comparison to a geometric series. The hypothesis is that we can bound $ |a_n| $ by a geometric series for all large $ n $, implying convergence.
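As a concrete illustration (a rough Python sketch with the arbitrary choice $ a_n = \frac{n^2}{3^n} $, so $ |a_n|^{1/n} \rightarrow \frac{1}{3} $): past some index $ N $ the terms are dominated by $ \lambda^n $ with $ \lambda = 0.4 $, and the geometric tail bounds the tail of the series.

def a(n):
    # arbitrary example: a_n = n^2 / 3^n, so |a_n|^(1/n) -> 1/3 < 1
    return n * n / 3 ** n

lam = 0.4
roots = [a(n) ** (1.0 / n) for n in range(1, 200)]
N = 1 + max(n for n, root in enumerate(roots, start=1) if root > lam)
print("N =", N)                                  # |a_n|^(1/n) <= lam for all tested n >= N

partial = sum(a(n) for n in range(1, 200))       # partial sum of the series
tail_bound = lam ** N / (1 - lam)                # geometric bound on the tail from N on
print("partial sum =", partial, " tail bound =", tail_bound)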
--------------------
Practice Problem: (Stanford Math 51H) Discuss the convergence of
$ \displaystyle \sum_{n=1}^{\infty} \frac{\sin{\frac{1}{n}}}{n} $.
Saturday, November 25, 2006
Stirling Silver. Topic: Calculus.
Problem: Evaluate $ \displaystyle \lim_{n \rightarrow \infty} \frac{k^n \cdot n!}{n^n} $ where $ k $ is an arbitrary positive real number.
Solution: Well in its current form the limit does not look very fun, so let's take the natural log of it.
$ \displaystyle \ln{L_k} = \lim_{n \rightarrow \infty} \left(n \ln{k}+\sum_{i=1}^n \ln{i}-n\ln{n}\right) $
where $ L_k $ is the limit in question. Let's bound the summation term on the RHS using integrals. We see that
$ \displaystyle \int_1^n \ln{x} dx \le \sum_{i=1}^n \ln{i} \le \int_1^n \ln{x}dx + \ln{n} $
$ \displaystyle n\ln{n}-n+1 \le \sum_{i=1}^n \ln{i} \le n\ln{n}-n+1+\ln{n} $
because $ \displaystyle \int \ln{x}\, dx = x\ln{x}-x+C $. Plugging this back in (the constant $ 1 $ does not affect the limits, so we drop it), we get
$ \displaystyle \lim_{n \rightarrow \infty} n(\ln{k}-1) \le \ln{L_k} \le \lim_{n \rightarrow \infty} \left(n(\ln{k}-1)+\ln{n}\right) $.
For $ k < e $, the coefficient of the $ n $ term is negative and it dominates the $ \ln{n} $ term, so as $ n \rightarrow \infty $ both the upper and lower bounds approach $ - \infty $, hence $ \ln{L_k} \rightarrow -\infty \Rightarrow L_k \rightarrow 0 $.
For $ k > e $, the coefficient on the $ n $ term is positive, so both the upper and lower bounds approach $ \infty $ so $ \ln{L_k} \rightarrow \infty \Rightarrow L_k \rightarrow \infty $.
Now for the trickier case $ k = e $. Consider a midpoint approximation of the area under the curve $ \ln{x} $ from $ \frac{1}{2} $ to $ n+\frac{1}{2} $. Since $ \ln{x} $ is concave, the midpoint approximation will be larger than the integral (draw a picture to see this). So
$ \displaystyle \int_{\frac{1}{2}}^{n+\frac{1}{2}} \ln{x}dx \le \sum_{i=1}^n \ln{i} $.
But the LHS turns out to be
$ \left(n+\frac{1}{2}\right)\ln{\left(n+\frac{1}{2}\right)}-\left(n+\frac{1}{2}\right)-\frac{1}{2}\ln{\frac{1}{2}}+\frac{1}{2} $
which we can bound below by
$ n\ln{n}+\frac{1}{2}\ln{n}-n+C $
for some constant $ C $. Then
$ \displaystyle \sum_{i=1}^n \ln{i} \ge n\ln{n}+\frac{1}{2}\ln{n}-n+C $.
Plugging back into our original expression for $ \ln{L_k} $, we obtain
$ \ln{L_e} \ge \lim_{n\rightarrow \infty} (n+n\ln{n}+\frac{1}{2}\ln{n}-n+C-n\ln{n}) = \lim_{n\rightarrow \infty} (\frac{1}{2}\ln{n}+C) = \infty $
so $ L_e \rightarrow \infty $ as well. Hence if $ k < e $, the limit converges to zero and if $ k \ge e $ the limit diverges to $ \infty $. QED.
--------------------
Comment: This limit is very interesting because it gives us an idea about Stirling's approximation. In fact, from the last inequality we see that $ \frac{e^n \cdot n!}{n^n} $ grows like $ \sqrt{n} $ (up to a constant factor), so we have
$ n! \sim \left(\frac{n}{e}\right)^n \cdot \sqrt{n} $.
A very good approximation is
$ n! \approx \sqrt{\pi \cdot \left(2n+\frac{1}{3}\right)} \cdot \left(\frac{n}{e}\right)^n $.
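A quick numerical comparison (Python sketch) of $ n! $ against $ \left(\frac{n}{e}\right)^n \sqrt{n} $ and against the sharper approximation above: the first ratio settles near the Stirling constant $ \sqrt{2\pi} \approx 2.5066 $, while the second stays very close to $ 1 $.

import math

for n in [5, 10, 20, 50]:
    exact = math.factorial(n)
    rough = (n / math.e) ** n * math.sqrt(n)                        # (n/e)^n sqrt(n)
    gosper = math.sqrt(math.pi * (2 * n + 1.0 / 3)) * (n / math.e) ** n
    print(n, exact / rough, exact / gosper)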
Thursday, November 23, 2006
Limitten. Topic: Real Analysis.
Definition: A sequence $ \{a_n\} $ has limit $ L $, where $ L $ is a given real number, if for each $ \epsilon > 0 $ there is an integer $ N \ge 1 $ such that
$ |a_n-L| < \epsilon $ for all integers $ n \ge N $.
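For instance, here is a tiny Python sketch of the definition with the (arbitrarily chosen) sequence $ a_n = 1 + \frac{(-1)^n}{n} $ and limit $ L = 1 $: for each $ \epsilon $ it produces an $ N $ that works and spot-checks the tail.

# definition check for a_n = 1 + (-1)^n / n, which has limit L = 1
L = 1.0
a = lambda n: 1.0 + (-1) ** n / n

for eps in [0.5, 0.1, 0.01, 0.001]:
    # |a_n - L| = 1/n < eps whenever n > 1/eps, so N = floor(1/eps) + 1 works
    N = int(1 / eps) + 1
    assert all(abs(a(n) - L) < eps for n in range(N, N + 10000))
    print("eps =", eps, "-> N =", N)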
--------------------
Problem: (Stanford Math 51H) Prove that a sequence $ \{a_n\} $ cannot have more than one limit.
Solution: This is intuitively clear, as usual, but a rigorous argument is much more fun.
Suppose, for contradiction, that $ \lim a_n = L_1 $ and $ \lim a_n = L_2 $ with $ L_1 \neq L_2 $. Letting $ \epsilon = \frac{|L_1-L_2|}{2} > 0 $, we know there exist $ N_1, N_2 $ such that
$ |a_n-L_1| < \epsilon $ for $ n \ge N_1 $
$ |a_n-L_2| < \epsilon $ for $ n \ge N_2 $.
So the above two inequalities are true for $ n \ge \max(N_1, N_2) $. Adding the two inequalities together, we get
$ |a_n-L_1|+|a_n-L_2| < 2\epsilon = |L_1-L_2| $.
However, by the triangle inequality we know $ |a|+|b| \ge |a-b| $ (apply the usual $ |a|+|b| \ge |a+b| $ with $ b $ replaced by $ -b $), so
$ |a_n-L_1|+|a_n-L_2| \ge |L_2-L_1| = |L_1-L_2| $.
Then
$ |L_1-L_2| < |L_1-L_2| $,
a contradiction. Hence $ \{a_n\} $ can have at most one limit. QED.
--------------------
Comment: The real definition of the limit is almost always completely overlooked in a regular calculus course (i.e. Calc AB and BC). But it's pretty much the foundation of all of calculus, so it is really quite nice to know and work with.
--------------------
Practice Problem: (Stanford Math 51H, Sandwich Theorem) If $ \{a_n\}, \{b_n\} $ are given convergent sequences with $ \lim a_n = \lim b_n $, and if $ \{c_n\} $ is any sequence such that $ a_n \le c_n \le b_n \forall n \ge 1 $, prove that $ \{c_n\} $ is convergent and $ \lim c_n = \lim a_n = \lim b_n $.
Wednesday, November 22, 2006
Root Beer. Topic: Real Analysis.
Problem: (Stanford Math 51H) Prove that every positive real number has a positive square root (That is, for any $ b > 0 $, prove that there is a real number $ \alpha $ such that $ \alpha^2 = b $). [Usual properties of the integers are assumed.]
Solution: Consider the set $ S = \{x \in \mathbb{R}: x > 0, x^2 < b\} $. We can show that $ S $ is non-empty by selecting an integer $ n $ large enough such that $ \frac{1}{n^2} < b $, so that $ \frac{1}{n} \in S $. Since $ b $ is a real and the integers are unbounded, there exists a positive integer $ k $ such that $ k^2 > b $; thus every $ x \in S $ satisfies $ x^2 < b < k^2 \Rightarrow x < k $, so $ S $ is bounded from above.
By the least upper bound property, there must exist $ \alpha $ such that $ \sup S = \alpha $. We claim that $ \alpha^2 = b $. Suppose the contrary; then either $ \alpha^2 < b $ or $ \alpha^2 > b $.
CASE 1: If $ \alpha^2 < b $, then consider $ \left(\alpha+\frac{1}{n}\right)^2 = \alpha^2+\frac{2\alpha}{n}+\frac{1}{n^2} $. Choose $ n $ large enough such that
$ \alpha^2+\frac{2\alpha}{n}+\frac{1}{n^2} < b $
which is definitely true for any $ n > \frac{2\alpha+1}{b-\alpha^2} $. But then $ \alpha+\frac{1}{n} \in S $ and $ \alpha+\frac{1}{n} > \alpha = \sup S $, contradicting the fact that $ \alpha $ is the supremum.
CASE 2: If $ \alpha^2 > b $, then consider $ \left(\alpha-\frac{1}{n}\right)^2 = \alpha^2-\frac{2\alpha}{n}+\frac{1}{n^2} $. Again, choose $ n $ large enough such that
$ \alpha^2-\frac{2\alpha}{n}+\frac{1}{n^2} > b $
which we know is true for $ n > \frac{2\alpha}{\alpha^2-b} $ (furthermore, we impose the restriction $ \alpha > \frac{1}{n} $ so our resulting real is positive). Then $ x \le \alpha-\frac{1}{n} $ for all $ x \in S $ and $ \alpha-\frac{1}{n} < \alpha = \sup S $, contradicting the fact that $ \alpha $ is the supremum.
Hence we know that $ \alpha^2 = b $, or that the square root of any positive real number exists and is a positive real number. QED.
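The argument is non-constructive, but the same squeeze can be imitated numerically. A small Python sketch (an illustration, not the proof): bisection keeps $ lo^2 < b \le hi^2 $ and converges to the $ \alpha $ whose existence the supremum argument establishes.

def my_sqrt(b, iterations=60):
    # maintain lo^2 < b <= hi^2 and shrink the interval; the common limit is alpha with alpha^2 = b
    lo, hi = 0.0, max(b, 1.0)
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if mid * mid < b:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for b in [2.0, 0.5, 10.0, 12345.678]:
    alpha = my_sqrt(b)
    print(b, alpha, alpha * alpha)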
--------------------
Comment: This is from the real analysis portion of a freshman honors calculus course, i.e. a rigorous treatment of the real numbers which is the basis for calculus like limits and stuff. Really understanding calculus involves really understanding how the real number system works.
--------------------
Practice Problem: (Stanford Math 51H) If $ a, b \in \mathbb{R} $ with $ a < b $, prove:
(a) There is a rational $ r \in (a,b) $.
(b) There is an irrational $ c \in (a,b) $.
(c) $ (a,b) $ contains infinitely many rationals and infinitely many irrationals.
Monday, November 20, 2006
Function After Function. Topic: Algebra/Calculus. Level: AIME.
Problem: (1985 Putnam - B2) Define polynomials $ f_n(x) $ for $ n \ge 0 $ by $ f_0(x) = 1 $, $ f_n(0) = 0 $ for $ n \ge 1 $, and
$ \frac{d}{dx} f_{n+1}(x) = (n+1)f_n(x+1) $
for $ n \ge 0 $. Find, with proof, the explicit factorization of $ f_{100}(1) $ into powers of distinct primes.
Solution: As usual, we try a few small cases and look for an induction (this is important to get in the habit of, especially when an arbitrarily large number is given, i.e. $ 100 $). First, convert the derivative into an integral; we use the initial conditions $ f_n(0) = 0 $ to fix the constants of integration without citing them each time. So we find that
$ \displaystyle f_1(x) = \int f_0(x+1)dx = \int 1dx = x $
$ \displaystyle f_2(x) = 2 \int f_1(x+1)dx = 2 \int (x+1)dx = x^2+2x = x(x+2) $
$ \displaystyle f_3(x) = 3 \int f_2(x+1)dx = 3 \int (x^2+4x+3) dx = x^3+6x^2+9x = x(x+3)^2 $
$ \displaystyle f_4(x) = 4 \int f_3(x+1)dx = 4 \int (x^3+9x^2+24x+16)dx = x(x+4)^3 $.
By this point, you hope you have figured the pattern out because the next calculations get pretty nasty and we don't want to go there. So we conjecture, naturally, that $ f_n(x) = x(x+n)^{n-1} $ for all $ n \ge 1 $. We can show this by induction, with the base cases given above.
$ \displaystyle f_{n+1}(x) = (n+1) \int f_n(x+1)dx = (n+1) \int (x+1)(x+n+1)^{n-1} dx $.
So... how do we integrate this? In fact, a common strategy can be applied here: whenever some function of $ x $ is taken to a power, try substituting that function of $ x $ out. Here, we use $ u = x+n+1 $ so $ du = dx $. The new integral becomes
$ \displaystyle f_{n+1}(x) = (n+1) \int (u-n)u^{n-1}du = (n+1) \int (u^n-nu^{n-1})du = u^{n+1}-(n+1)u^n $.
Factoring and substituting back, we get
$ f_{n+1}(x) = u^n(u-n-1) = (x+n+1)^n(x+n+1-n-1) = x(x+n+1)^n $,
completing the induction. Hence
$ f_n(x) = x(x+n)^{n-1} \Rightarrow f_{100}(1) = 101^{99} $,
which is, incidentally, the prime factorization. QED.
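The recursion is easy to replay with exact rational arithmetic. Here is a short Python sketch (polynomials as coefficient lists of Fractions) that checks $ f_n(x) = x(x+n)^{n-1} $ for small $ n $ directly from the defining recursion.

from fractions import Fraction

def shift(p):
    # coefficients of p(x+1), given p(x) as [p_0, p_1, ...]
    out = [Fraction(0)] * len(p)
    for i, c in enumerate(p):
        binom = 1
        for k in range(i + 1):
            out[k] += c * binom
            binom = binom * (i - k) // (k + 1)   # C(i, k+1) from C(i, k)
    return out

def integrate(p):
    # antiderivative with constant term 0, so that f_n(0) = 0
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]

f = [Fraction(1)]                                # f_0(x) = 1
for n in range(1, 8):
    f = [n * c for c in integrate(shift(f))]     # f_n'(x) = n f_{n-1}(x+1), f_n(0) = 0
    for x in range(1, 6):                        # compare against x (x+n)^(n-1) at a few points
        value = sum(c * Fraction(x) ** i for i, c in enumerate(f))
        assert value == Fraction(x) * Fraction(x + n) ** (n - 1)
print("verified f_n(x) = x (x+n)^(n-1) for n <= 7, so f_100(1) = 101^99")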
--------------------
Comment: A really cool problem involving function recursion and a nice induction. Remember that whenever you see a recursion, induction is the way to go!
--------------------
Practice Problem: Solve the equation $ 2z^3-(5+6i)z^2+9iz+(1-3i) = 0 $ knowing that one of the solutions is real.
Sunday, November 19, 2006
Saturday, November 18, 2006
The Matrix?! Topic: Calculus/Linear Algebra.
Definition: A square matrix $ A $ is diagonalizable if there exist matrices $ V, D $ such that $ V $ is invertible, $ D $ is diagonal (has nonzero entries only on the main diagonal), and $ A = VDV^{-1} $.
--------------------
Problem: Solve the differential equation $ x^{\prime}(t) = Ax(t) $ where $ A $ is a diagonalizable $ n \times n $ square matrix and $ x(t) $ is a vector valued function (which must have $ n $ components).
Solution: Well this is a lot like the regular differential equation $ y^{\prime} = ky $ which has solution $ y = y_0 e^{kt} $. So let's guess that
$ x(t) = e^{At}x(0) $
is the solution. But what exactly does $ e^{At} $ mean? Consider the following.
--------------------
Definition: If $ f $ is a function and $ D $ is a diagonal square matrix, then we say that $ f(D) $ is simply the matrix with $ f $ applied to all the elements on the diagonal.
If $ A $ is diagonalizable, then we have $ A = VDV^{-1} $ and we define $ f(A) = Vf(D)V^{-1} $.
--------------------
So does $ x(t) = e^{At}x(0) $ work as a solution to the differential equation? It turns out that, by using the definition of the derivative we can show that
$ \frac{d}{dt}[e^{Dt}] = De^{Dt} $
and
$ \frac{d}{dt}[e^{At}x(0)] = VDe^{Dt}V^{-1}x(0) $.
However, if we use the function $ g(x) = xe^{xt} $, we get
$ Ae^{At}x(0) = g(A)x(0) = Vg(D)V^{-1}x(0) = VDe^{Dt}V^{-1}x(0) $,
so $ x^{\prime}(t) = \frac{d}{dt}[e^{At}x(0)] = Ae^{At}x(0) = Ax(t) $,
as desired. QED.
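A numerical sketch in Python (using numpy, with an arbitrary symmetric example so it is certainly diagonalizable): build $ e^{At} = Ve^{Dt}V^{-1} $ from an eigendecomposition and check $ x^{\prime}(t) = Ax(t) $ by a finite difference.

import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, -1.0]])                     # arbitrary diagonalizable (symmetric) example
x0 = np.array([1.0, 2.0])

evals, V = np.linalg.eig(A)                     # A = V D V^{-1} with D = diag(evals)

def x(t):
    # x(t) = e^{At} x(0) = V e^{Dt} V^{-1} x(0)
    return V @ np.diag(np.exp(evals * t)) @ np.linalg.inv(V) @ x0

t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)         # finite-difference approximation of x'(t)
print(deriv)
print(A @ x(t))
print(np.allclose(deriv, A @ x(t), atol=1e-4))  # True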
--------------------
Comment: Applying functions to matrices is a super cool thing and being able to solve differential equations with matrix coefficients is even better. Linear algebra provides all sorts of new ways to work with matrices.
--------------------
Practice Problem: Show that all symmetric square matrices are diagonalizable.
Thursday, November 16, 2006
Gammazzz. Topic: Calculus/S&S.
Definition: (Gamma function) $ \displaystyle \Gamma(z) = \int_0^{\infty} t^{z-1}e^{-t} dt = \lim_{n \rightarrow \infty} \frac{n! \cdot n^z}{z(z+1)(z+2) \cdots (z+n)} $. Note that the Gamma function satisfies the property $ \Gamma(z+1) = z\Gamma(z) $ and $ \Gamma(k+1) = k! $ for nonnegative integers $ k $.
--------------------
Problem: Prove that $ \Gamma(1-z) \Gamma(z) = \frac{\pi}{\sin{\pi z}} $.
Solution: We use the second definition for $ \Gamma $ given above to evaluate
$ \displaystyle \Gamma(-z) \Gamma(z) = \lim_{n \rightarrow \infty} \frac{n! \cdot n^z}{z(z+1)(z+2) \cdots (z+n)} \cdot \lim_{n \rightarrow \infty} \frac{n! \cdot n^{-z}}{-z(-z+1)(-z+2) \cdots (-z+n)} $.
Divide the top and bottom by $ n! $ in both limits to obtain
$ \displaystyle \Gamma(-z) \Gamma(z) = \lim_{n \rightarrow \infty} \frac{ n^z}{z(1+\frac{z}{1})(1+\frac{z}{2}) \cdots (1+\frac{z}{n})} \cdot \lim_{n \rightarrow \infty} \frac{n^{-z}}{-z(1-\frac{z}{1})(1-\frac{z}{2}) \cdots (1-\frac{z}{n})} $.
Multiplying these limits together, we get
$ \displaystyle \Gamma(-z) \Gamma(z) = \lim_{n \rightarrow \infty} \frac{ 1}{-z^2(1+\frac{z}{1})(1-\frac{z}{1})(1+\frac{z}{2})(1-\frac{z}{2}) \cdots (1+\frac{z}{n})(1-\frac{z}{n})} $.
Notice, however, that
$ \displaystyle \sin{\pi z} = \pi z \cdot \prod_{n=1}^{\infty} \left(1+\frac{z}{n}\right)\left(1-\frac{z}{n}\right) $
because it has zeros at all of the integers and has first term $ \pi z $ (by Taylor series). So
$ \displaystyle \Gamma(-z) \Gamma(z) = \frac{ 1}{-z \cdot \frac{\sin{\pi z}}{\pi}} $
and
$ \displaystyle -z \Gamma(-z) \Gamma(z) = \frac{\pi}{\sin{\pi z}} $.
Lastly, by the property $ \Gamma(1-z) = -z\Gamma(-z) $, we arrive at the identity
$ \Gamma(1-z) \Gamma(z) = \frac{\pi}{\sin{\pi z}} $.
QED.
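A quick numerical spot check of the reflection formula for a few values of $ 0 < z < 1 $ (Python sketch using math.gamma):

import math

for z in [0.1, 0.25, 0.5, 0.75, 0.9]:
    lhs = math.gamma(1 - z) * math.gamma(z)
    rhs = math.pi / math.sin(math.pi * z)
    print(z, lhs, rhs)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)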
--------------------
Comment: The Gamma function is quite a useful function and actually shows up a lot in higher-level math. The proof above assumes the equivalence of the two forms of the function, which can be proven using the fact that the Gamma function is log-convex. In fact, it can be shown that only one such function exists that satisfies $ f(1) = 1 $, $ f(x+1) = xf(x) $, and log-convexity (the Bohr-Mollerup theorem).
--------------------
Practice Problem: Show that $ \Gamma(x) $ is log-convex for $ x > 0 $, i.e. $ \log{\Gamma(tx_1+(1-t)x_2)} \le t\log{\Gamma(x_1)}+(1-t)\log{\Gamma(x_2)} $ for all $ 0 \le t \le 1 $.
Tuesday, November 14, 2006
You Have Been Numberized. Topic: Calculus/Statistics.
Introduction: Consider devising a rating system in which matches consisting of an arbitrary number of "players" are "played." After each match, the rating of each player should reflect the position of that player among all competitors.
Solution: Obviously, this is only an idea. There's not a set "solution" to this. Let the players be $ P_1, P_2, \ldots, P_n $ with corresponding ratings $ R_1, R_2, \ldots, R_n $ and volatilities $ V_1, V_2, \ldots, V_n $. We will let the initial values of $ R_i $ be $ 1000 $ and the initial values of $ V_i $ be $ 100 $. Basically, the performance of player $ P_i $ will be a random variable with normal distribution with mean $ R_i $ and standard deviation $ V_i $. The performance distribution will be
$ S_i(x) = \frac{1}{V_i \sqrt{2\pi}} e^{-\frac{(x-R_i)^2}{2V_i^2}} $.
We let $ \displaystyle D_i(x) = \int_{-\infty}^x S_i(t) dt $ be the cumulative distribution.
Assume $ k $ players $ P_1, P_2, \ldots, P_k $ participate in a match. We will recalculate ratings according to the following steps:
(1) Find the weight of the competition, depending on the total rating of the participants. We would like it to take on values between zero and one, so try something like
$ W = \left(\frac{\sum_{i=1}^k R_i}{\sum_{i=1}^n R_i}\right)^{\frac{1}{C}} $,
for a suitable constant $ C > 1 $. Basically, this is small when there are not a lot of players, but increases rapidly as the rating of the participants approaches the total rating.
(2) For a player $ P_m $, we will calculate his expected rank based on the normal distribution. Let $ P(m, j) $ be the probability that $ P_m $ loses to $ P_j $. It turns out to be
$ \displaystyle P(m,j) = \int_{-\infty}^{\infty} S_j(x) \cdot D_m(x) dx $.
So if we sum up the probabilities that $ P_m $ will lose to each player (including himself), we get the expected rank $ E_m $ of $ P_m $, or
$ \displaystyle E_m = 0.5+\sum_{j=1}^k P(m,j) $,
where the $ 0.5 $ is to shift the best ranking to $ 1 $.
(3) The actual rank of $ P_m $ is $ A_m $, and we want to base rating changes on the difference $ E_m-A_m $. If $ I_m $ is the number of competitions $ P_m $ has participated in, calculate the new $ R^{\prime}_m $ as
$ R^{\prime}_m = R_m+\frac{WK}{I_m+1}(E_m-A_m) $,
where $ K $ is a suitable positive constant for equating differences in ranking to differences in rating.
(4) We now update the volatility $ V_m $. If the player's performance is relatively close to the expected performance, we want $ V_m $ to decrease to imply stability of the player. On the other hand, if the player performs exceptionally well or poorly, we want $ V_m $ to increase to reflect inconsistency. We propose the following change:
$ V^{\prime}_m = V_m+WQ_1(|E_m-A_m|-Q_2) $,
where $ Q_1, Q_2 $ are suitable positive constants. $ Q_1 $ determines the magnitude of volatility change and $ Q_2 $ determines the threshold for which performance is considered relatively constant.
All in all, this rating system depends a lot on experimentally finding the constants to put into working use. Otherwise, it seems like it would work.
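To make the steps concrete, here is a rough Python sketch of a single rating update. All constants are placeholder values picked purely for illustration, not values the scheme prescribes, and the expected-rank integral is replaced by its closed form for two independent normal performances, $ P(m,j) = \Phi\left(\frac{R_j-R_m}{\sqrt{V_m^2+V_j^2}}\right) $.

import math

def phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_lose(Rm, Vm, Rj, Vj):
    # closed form of P(m, j): probability that player m's performance is below player j's
    return phi((Rj - Rm) / math.sqrt(Vm * Vm + Vj * Vj))

C, K, Q1, Q2 = 2.0, 400.0, 0.5, 1.0              # placeholder constants, chosen arbitrarily

def update(players, ranks, total_rating):
    # players: list of dicts with keys R, V, I; ranks: actual ranks A_m (1 = best)
    W = (sum(p["R"] for p in players) / total_rating) ** (1.0 / C)
    new = []
    for m, p in enumerate(players):
        E = 0.5 + sum(p_lose(p["R"], p["V"], q["R"], q["V"]) for q in players)
        A = ranks[m]
        R_new = p["R"] + W * K / (p["I"] + 1) * (E - A)
        V_new = p["V"] + W * Q1 * (abs(E - A) - Q2)
        new.append({"R": R_new, "V": V_new, "I": p["I"] + 1})
    return new

players = [{"R": 1000.0, "V": 100.0, "I": 0} for _ in range(4)]
print(update(players, ranks=[1, 2, 3, 4], total_rating=10 * 1000.0))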
--------------------
Comment: Rating systems are always controversial and people are always trying to devise ways to get around it. On a lighter note, many restrictions can be added like a rating change cap or a volatility cap. Stabilizers for higher ratings are also a possibility (i.e. reducing the weight of each match). Any thoughts?
Monday, November 13, 2006
Step By Step. Topic: Calculus.
Problem #1: Show that, for all continuous, integrable functions $ f $ on the interval $ [a,b] $ (plus whatever other conditions you need), we have
$ \displaystyle \left(\int_a^b f(x)dx \right)^2 = \int_a^b \int_a^b f(x) \cdot f(y) dx dy $.
Problem #2: Show that, for all continuous, integrable functions $ f $ on the interval $ [0, \infty) $ (plus whatever other conditions you need), we have
$ \displaystyle \int_0^{\infty} \int_0^{\infty} f(x) \cdot f(y) dx dy = \int_0^{\frac{\pi}{2}} \int_0^{\infty} r \cdot g(r, \theta) dr d\theta $
where $ (r, \theta) $ are the polar coordinates for $ (x,y) $ and $ g(r, \theta) = f(x) \cdot f(y) $. Indeed, we notice that $ g(r, \theta) = f(r \cos{\theta}) \cdot f(r \sin{\theta}) $.
Problem #3: Use the above two properties to evaluate
$ \displaystyle \int_0^{\infty} e^{-x^2} dx $.
--------------------
Comment: Probably one of the coolest tricks in my limited calculus background. The first identity is nice to know and simple to prove, but the second is the really big leap that is useful for evaluating definite integrals which you cannot simply integrate. Hopefully, after doing each of these problems, the method will become clear.
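A quick numerical sanity check of the identity in Problem #1 (a Python sketch with an arbitrary choice of $ f $): the square of the single integral matches the double integral.

import math

f = lambda x: math.exp(-x * x)                  # arbitrary integrand
a, b, N = 0.0, 2.0, 500
h = (b - a) / N
values = [f(a + (i + 0.5) * h) for i in range(N)]   # midpoint-rule samples

single = sum(values) * h                        # int_a^b f(x) dx
double = sum(u * v for u in values for v in values) * h * h   # int int f(x) f(y) dx dy
print(single ** 2, double)                      # the two agree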
Saturday, November 11, 2006
Simon Says Factor. Topic: Linear Algebra.
Problem: (2003 IMC Day 2 - #1) Let $ A $ and $ B $ be $ n \times n $ real matrices such that $ AB+B+A = 0 $. Prove that $ AB = BA $.
Solution: Well, seeing the $ AB+B+A $, we think Simon's Favorite Factoring Trick. But, we have to remember that we're working with matrices, not numbers. So instead of $ (x+1)(y+1) = xy+x+y+1 $, we have
$ (A+I_n)(B+I_n) = AB + I_nB+AI_n+I_n^2 $,
where $ I_n $ is the $ n \times n $ identity matrix (the equivalent of $ 1 $ in the matrix world). Recall that for any square matrix $ M $ we have $ MI = IM = M $ (where $ I $ is the identity matrix of corresponding size), so
$ AB + I_nB+AI_n+I_n^2 = AB+B+A+I_n = 0+I_n = I_n $
from the given condition. But the equation
$ (A+I_n)(B+I_n) = I_n $
means that $ A+I_n $ and $ B+I_n $ are inverses of each other (for square matrices, a one-sided inverse is automatically two-sided). Consequently, we can multiply them in either order. So
$ (A+I_n)(B+I_n) = (B+I_n)(A+I_n) $
$ AB+B+A+I_n = BA+A+B+I_n \Rightarrow AB = BA $
as desired. QED.
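A quick randomized check (Python/numpy sketch): generate a random $ A $ with $ A+I $ invertible, define $ B $ so that $ AB+B+A = 0 $, and confirm that $ AB = BA $.

import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(2, 6))
    A = rng.standard_normal((n, n))
    I = np.eye(n)
    B = np.linalg.inv(A + I) - I                 # AB + B + A = 0  <=>  (A+I)(B+I) = I
    assert np.allclose(A @ B + B + A, 0, atol=1e-6)
    assert np.allclose(A @ B, B @ A, atol=1e-6)
print("AB = BA held in every trial")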
--------------------
Comment: Just basic algebraic manipulations if you knew several key properties of matrices and recognized the factoring. Always good to keep these things handy!
--------------------
Practice Problem: (1999 IMC Day 1 - #1) Show that for all natural numbers $ n $ there exists a real $ n \times n $ matrix $ A $ such that $ A^3 = A+I $. Furthermore, prove that $ |A| > 0 $ for all such $ A $. [Reworded]
Thursday, November 9, 2006
Dimensions Everywhere. Topic: Linear Algebra.
Problem: (1998 IMC Day 1 - #1) Let $ V $ be a $ 10 $-dimensional real vector space and $ U_1, U_2 $ two linear subspaces such that $ U_1 \subseteq U_2 $, $ \dim U_1 = 3 $, and $ \dim U_2 = 6 $. Let $ \epsilon $ be the set of linear maps $ T: V \rightarrow V $ which have $ T(U_1) \subseteq U_1 $ and $ T(U_2) \subseteq U_2 $. Calculate the dimension of $ \epsilon $.
Solution: Choose a basis of $ V $ adapted to the chain $ U_1 \subseteq U_2 \subseteq V $, so the first three basis vectors span $ U_1 $ and the first six span $ U_2 $. Without the constraints we have a total of $ 10 \cdot 10 = 100 $ degrees of freedom. The constraint $ T(U_1) \subseteq U_1 $ takes away $ 3 \cdot 7 = 21 $ (the images of the three basis vectors of $ U_1 $ must have no component outside $ U_1 $), while the constraint $ T(U_2) \subseteq U_2 $ takes away an additional $ 3 \cdot 4 = 12 $ (the images of the remaining three basis vectors of $ U_2 $ must have no component outside $ U_2 $). Hence the dimension of $ \epsilon $ is $ 100-21-12 = 67 $. QED.
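Equivalently, in that adapted basis the allowed maps are exactly those whose matrices have zeros in the forced blocks; counting the free entries reproduces the answer (a tiny Python sketch):

# columns/rows indexed 0..9; coordinates 0-2 span U1, coordinates 0-5 span U2
free = 0
for col in range(10):
    for row in range(10):
        if col < 3 and row >= 3:        # images of U1 basis vectors must stay in U1
            continue
        if col < 6 and row >= 6:        # images of U2 basis vectors must stay in U2
            continue
        free += 1
print(free)                             # 67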
--------------------
Comment: A pretty standard problem on linear transformations and linear algebra. Easy by most standards, since it really only involves some simple calculations.
--------------------
Practice Problem: What is the dimension of the span of the vectors $ (1, 2, 3) $, $ (2, 3, 4) $, and $ (0, -1, -2) $?
Wednesday, November 8, 2006
Tuesday, November 7, 2006
Rank Me! Topic: Linear Algebra.
Problem: (2005 IMC Day 1 - #1) Let $ A $ be the $ n \times n $ matrix with entries $ A_{ij} = i+j $. Find the rank of $ A $.
Solution: Well we clearly have the $ k $th row of the matrix equal to
$ R_k = (k+1, k+2, \ldots, k+n) $.
So we just need to reduce this to row-echelon form through elementary operations. Well, we can take
$ R_k \rightarrow R_k-R_{k-1} $
for $ k \ge 2 $, so we now have
$ R_1 = (2, 3, \ldots, n+1) $ and $ R_k = (1, 1, \ldots, 1) $
for $ k \ge 2 $. Now for all $ k > 2 $, take
$ R_k \rightarrow R_k-R_2 $
and take $ R_1 \rightarrow R_1-2R_2 $ so our matrix becomes
$ R_1 = (0, 1, \ldots, n-1) $, $ R_2 = (1, 1, \ldots, 1) $, and $ R_k = (0, 0, \ldots, 0) $
for $ k>2 $. Swapping $ R_1 $ and $ R_2 $ gives us row-echelon form, from which we can easily see that there are two pivot elements, which means the rank of $ A $ is two. Note that for $ n = 1 $ the rank is one. QED.
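A one-line check with numpy (sketch): the rank is $ 2 $ for every $ n \ge 2 $ and $ 1 $ for $ n = 1 $.

import numpy as np

for n in range(1, 9):
    A = np.fromfunction(lambda i, j: (i + 1) + (j + 1), (n, n))   # A_{ij} = i + j, 1-based indices
    print(n, np.linalg.matrix_rank(A))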
--------------------
Comment: Really not a difficult problem if you know any fundamental linear algebra and the operations you can perform on a matrix to calculate its rank. Linear algebra is super cool and even useful on competitions (when you get to college)!
--------------------
Practice Problem: For any matrix $ A $, prove that $ \text{rank}(A) = \text{rank}(A^T) $.
Sunday, November 5, 2006
Oh Really... Topic: S&S/Sets. Level: AIME/Olympiad.
Problem: (1999 IMC Day 1 - #2) Does there exist a bijective map $ f(n): \mathbb{N} \rightarrow \mathbb{N} $ so that $ \displaystyle \sum_{n=1}^{\infty} \frac{f(n)}{n^2} $ is finite?
Solution: Let $ f $ be any bijection and set $ \displaystyle S_k = \sum_{n=1}^k \frac{f(n)}{n^2} $. The values $ f(1), f(2), \ldots, f(k) $ are $ k $ distinct positive integers, so replacing each by its rank gives a bijective map $ f_k: \{1,2, \ldots, k\} \rightarrow \{1,2, \ldots, k\} $ with $ f(n) \ge f_k(n) $ for every $ n $, and hence $ \displaystyle S_k \ge \sum_{n=1}^k \frac{f_k(n)}{n^2} $. Since the sets
$ \{1, 2, \ldots, k \} $ and $ \{\frac{1}{1^2}, \frac{1}{2^2}, \cdots, \frac{1}{k^2} \} $
are oppositely ordered, by Rearrangement we have
$ \displaystyle S_k \ge \frac{1}{1^2}+\frac{2}{2^2}+\cdots+\frac{k}{k^2} = \sum_{n=1}^k \frac{1}{n} $.
Hence $ \displaystyle \lim_{k \rightarrow \infty} S_k \ge \lim_{k \rightarrow \infty} \sum_{n=1}^k \frac{1}{n} = \infty $, so the answer is no. QED.
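A brute-force check of the key inequality for small $ k $ (Python sketch): over all bijections of $ \{1, \ldots, k\} $, the sum $ \sum \frac{f(n)}{n^2} $ is minimized by the identity map, where it equals the harmonic sum $ \sum_{n=1}^k \frac{1}{n} $.

from fractions import Fraction
from itertools import permutations

for k in range(1, 8):
    harmonic = sum(Fraction(1, n) for n in range(1, k + 1))
    minimum = min(sum(Fraction(f, n * n) for n, f in enumerate(perm, start=1))
                  for perm in permutations(range(1, k + 1)))
    assert minimum == harmonic
print("the minimum of sum f(n)/n^2 over bijections of {1,...,k} is the harmonic sum")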
--------------------
Comment: Basically, knowledge of the Rearrangement inequality killed this problem. It's also useful to know what a bijective map is - a function $ f:A \rightarrow B $ where $ A $ and $ B $ are sets such that for any $ x, y \in A $ we have $ f(x) = f(y) \Rightarrow x = y $ and for any $ b \in B $ there exists $ x \in A $ such that $ f(x) = b $. A lot of higher level math is vocabulary and notation, which is sort of a pity, but that's just how it is.
--------------------
Practice Problem: Evaluate $ \displaystyle \int_0^{\infty} e^{-x^2}dx $.
Saturday, November 4, 2006
Pick And Choose. Topic: Combinatorics. Level: AMC.
Problem: (2006 Math Is Cool Championships) How many rectangles can you make in a $ 5 \times 6 $ grid? [Reworded]
Solution #1: Here's the "harder" solution, involving counting and subtracting. Consider all $ 42 $ lattice points (a $ 5 \times 6 $ grid of unit squares has $ 6 \times 7 = 42 $ of them). Pick any point first; there are $ 42 $ possible. Then pick any point not on the same horizontal/vertical lines. There are $ 42-6-7+1 = 30 $ of these. Form a rectangle with these as opposite vertices. We get $ 42 \cdot 30 $ ordered pairs.
But wait, each rectangle gets counted once for each of the four ordered pairs of opposite vertices (pick vertices $ 1-3 $, $ 3-1 $, $ 2-4 $, $ 4-2 $). So divide by $ 4 $ to get $ \frac{42 \cdot 30}{4} = 315 $. QED.
--------------------
Solution #2: Here's the "clever" solution. We will pick four lines - two vertical and two horizontal. It's not hard to see that each set of four lines will determine a unique rectangle (formed by the four intersection points). Hence the result is
$ \binom{6}{2} \cdot \binom{7}{2} = 15 \cdot 21 = 315 $.
QED.
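Both counts are easy to confirm by brute force (Python sketch over the $ 6 \times 7 $ array of lattice points):

from math import comb

points = [(x, y) for x in range(6) for y in range(7)]   # lattice points of the 5 x 6 grid

# Solution #1: ordered pairs of points sharing neither a vertical nor a horizontal line, divided by 4
pairs = sum(1 for p in points for q in points if p[0] != q[0] and p[1] != q[1])
print(pairs // 4)                       # 315

# Solution #2: choose 2 of the 6 lines in one direction and 2 of the 7 in the other
print(comb(6, 2) * comb(7, 2))          # 315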
--------------------
Comment: The first is probably more practical in a competition setting, but the second is probably more generally applicable and more educational. In any case, it's good to see two approaches to the same problem.
--------------------
Practice Problem: (2006 Math Is Cool Championships) How many integers are in the range of the function $ f(x) = \frac{4x^2+75}{2x^2+3} $?
Friday, November 3, 2006
Wednesday, November 1, 2006
The Cheat Is To The Limit. Topic: Calculus.
Problem: (1997 IMC Day 1 - #1) Let $ \{e_n\}^{\infty}_{n=1} $ be a sequence of positive reals with $ \displaystyle \lim_{n \rightarrow \infty} e_n = 0 $. Find
$ \displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} $.
Solution: First of all, you should notice that this looks a lot like a Riemann sum, so maybe we can transform it into an integral. In fact, this will solve the problem. First, bound the limit from below by
$ \displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} \ge \int^1_0 \ln{x}dx = [x\ln{x}-x]^1_0 = -1 $.
We then also have
$ \displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} \le \int_0^1 \ln{(x+e)}\, dx = [(x+e)\ln{(x+e)}-(x+e)]^1_0 = (1+e)\ln(1+e)-e\ln{e}-1 $
since for $ n $ big enough we have $ e_n \le e $. Thus we can take $ e \rightarrow 0 $ as $ e_n \rightarrow 0 $. This means
$ \displaystyle -1 \le \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n \ln{\left(\frac{k}{n}+e_n\right)} \le (1+e)\ln(1+e)-e\ln{e}-1 $.
Since the upper bound tends to $ -1 $ as $ e \rightarrow 0 $ (note $ e\ln{e} \rightarrow 0 $) and the lower bound is $ -1 $, the limit is $ -1 $. QED.
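A numerical sanity check (Python sketch, with the arbitrary choice $ e_n = \frac{1}{\sqrt{n}} $): the averages approach $ -1 $.

import math

def avg(n, e_n):
    return sum(math.log(k / n + e_n) for k in range(1, n + 1)) / n

for n in [10, 100, 1000, 100000]:
    print(n, avg(n, 1 / math.sqrt(n)))   # tends to -1 as n grows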
--------------------
Comment: A classic example of Riemann sums, which every calculus student should be familiar with (otherwise they aren't really learning calculus...). Applying some simple bounding techniques gives us the answer pretty easily.
--------------------
Practice Problem: (1997 IMC Day 1 - #2) Let $ a_n $ be a sequence of reals. Suppose $ \displaystyle \sum a_n $ converges. Do these sums converge as well?
(a) $ a_1+a_2+(a_4+a_3)+(a_8+\cdots+a_5)+(a_{16}+\cdots+a_9)+\cdots $
(b) $ a_1+a_2+(a_3)+(a_4)+(a_5+a_7)+(a_6+a_8)+(a_9+a_{11}+a_{13}+a_{15}) +(a_{10}+a_{12}+a_{14}+a_{16})+\cdots $